[comp.ai] DVM's request for definitions

brian@caen.engin.umich.edu (Brian Holtz) (05/26/88)

DVM asked us to propose definitions of "free will" and to discuss "what 
the consequences of adopting one of them would be for morality."  I'd 
like to do some of both.

It seems to me that determinists and anti-determinists have been arguing 
about different things when they argue about free will, namely:

1. determinist "free will": the ability to make at least some choices 
that are neither uncaused nor completely determined by physical forces

2. anti-determinist "free will": the ability to make at least some 
choices that are neither uncaused nor completely determined by external 
physical forces

(By an "uncaused choice" I mean a choice whose outcome depends on 
physical events that are in principle unpredictable.  I think this is 
what some people have meant when they call certain choices "random", and 
mention quantum effects.)

If you accept definition (1) (as I do), then the only alternative to 
determinism is dualism, which I don't see too many people defending.

If you accept definition (2) (as T. William Wells' posting suggested), 
then you run into the difficulty of distinguishing "external" and 
"internal" forces.  As Cliff Joslyn said in response to Mr. Wells, 

   we must say that reflexes, dreams, delusions, compulsions, etc., are
   all *OUTSIDE* of "me".  As the earlier lively conversation on whether
   thoughts can be controlled shows, we can carry this distinction-
   making on, narrowing the scope of the "willing agent" in the mind to
   a singularity (my Will), about which we cannot gather evidence about
   causal processes, nor make meaningful theories as to necessary and/or
   sufficient conditions.

I don't think this difficulty is insurmountable, and if psychologically 
principled distinctions can be made between the "willing agent" and the 
rest of "me", then here I think is consolation for those who do not wish 
to define "free will" out of existence.  They can just draw a line 
around some part of their minds, and declare that part free of external 
determination.  The internal determination that would remain could 
be embraced as *self*-determination.  Thus I will take definition (2) as 
a definition not of free will, but of self-determination.


I think we can preserve all the desirable moral consequences of free 
will if we replace "free will" in our ethics with a modified version of 
self-determination, which I will call volition.  The key to volition is 
that it should be operationally indistinguishable from free will.

3. volition: the ability to identify significant sets of options and to 
predict one's future choices among them, in the absence of any evidence 
that any other agent is able to predict those choices.

There are a lot of implications to replacing free will with my notion of 
volition, but I will just mention three.

- If my operationalization is a truly transparent one, then it is easy 
to see that volition (and now-dethroned free will) is incompatible with 
an omniscient god.  Also, anyone who could not predict his behavior as 
well as someone else could predict it would no longer be considered to 
have volition.

- The ethical problem of responsibility can still be managed if it is no 
longer seen to derive from a truly free will, but is rather assigned or 
claimed (as suggested by Kevin Brown's posting).

- Metaphysical arguments about the possible free will of machines would 
no longer be relevant. If a machine can predict its behavior better than 
anyone/thing else, it has volition; otherwise, it does not.
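Definition (3) amounts to an operational test, so it can be sketched as a toy comparison of prediction accuracies.  This is purely my own illustration of the test, not anything proposed in the thread; the function names, data, and the use of simple hit-rate accuracy are all assumptions.

```python
# Toy operationalization of "volition" (definition 3): an agent counts as
# having volition only while there is no evidence that any other predictor
# forecasts its choices better than the agent itself does.

def accuracy(predictions, actual_choices):
    """Fraction of choices predicted correctly (simple hit rate)."""
    hits = sum(1 for p, a in zip(predictions, actual_choices) if p == a)
    return hits / len(actual_choices)

def has_volition(self_predictions, rival_prediction_sets, actual_choices):
    """True iff no rival predictor beats the agent's self-predictions."""
    own = accuracy(self_predictions, actual_choices)
    return all(accuracy(rival, actual_choices) <= own
               for rival in rival_prediction_sets)

# Example: the agent self-predicts 3 of 4 choices; a rival (a "Black
# Cloud", say) predicts all 4, so the agent fails the test.
choices     = ["a", "b", "a", "c"]
self_guess  = ["a", "b", "a", "b"]   # 75% accurate
rival_guess = ["a", "b", "a", "c"]   # 100% accurate
print(has_volition(self_guess, [rival_guess], choices))  # -> False
```

Note how the contingency Holtz describes falls out of the test: volition is lost the moment a better rival predictor turns up, since the result depends on the set of rivals supplied.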

ok@quintus.UUCP (Richard A. O'Keefe) (05/26/88)

In article <894@maize.engin.umich.edu>, brian@caen.engin.umich.edu (Brian Holtz) writes:
> 3. volition: the ability to identify significant sets of options and to 
> predict one's future choices among them, in the absence of any evidence 
> that any other agent is able to predict those choices.
> 
> There are a lot of implications to replacing free will with my notion of 
> volition, but I will just mention three.
> 
> - If my operationalization is a truly transparent one, then it is easy 
> to see that volition (and now-dethroned free will) is incompatible with 
> an omniscient god.  Also, anyone who could not predict his behavior as 
> well as someone else could predict it would no longer be considered to 
> have volition.

Why does the other predictor have to be an AGENT?

"easy to see that ...?"  Nope.  You have to show that volition is
incompatible with perfect PAST knowledge first...

"no longer considered to have volition ..."  I've just been reading a
book called "Predictable Pairing" (sorry, I've forgotten the author's
name), and if he's right it seems to me that a great many people do
not have volition in this sense.  If we met Hoyle's "Black Cloud", and
it with its enormous computational capacity were to predict our actions
better than we did, would that mean that we didn't have volition any
longer, or that we had never had it?

What has free will to do with prediction?  Presumably a dog is not
self-conscious or engaged in predicting its activities, but does that
mean that a dog cannot have free will?  

One thing I thought AI people were taught was "beware of the homunculus".
As soon as we start identifying parts of our mental activity as external
to "ourselves" we're getting into homunculus territory.

For me, "to have free will" means something like "to act in accord with
my own nature".  If I'm a garrulous twit, people will be able to predict
pretty confidently that I'll act like a garrulous twit (even though I
may not realise this), but since I will then be behaving as I wish I
will correctly claim free will.

jeff@aiva.ed.ac.uk (Jeff Dalton) (05/29/88)

In article <894@maize.engin.umich.edu> brian@caen.engin.umich.edu (Brian Holtz) writes:
>If you accept definition (1) (as I do), then the only alternative to 
>determinism is dualism, which I don't see too many people defending.

Dualism wouldn't necessarily give free will: it would just transfer
the question to the spiritual.  Perhaps that is just as deterministic
as the material.

brian@caen.engin.umich.edu (Brian Holtz) (05/31/88)

In article <1020@cresswell.quintus.UUCP>, ok@quintus.UUCP (Richard A. 
O'Keefe) writes:
> In article <894@maize.engin.umich.edu>, brian@caen.engin.umich.edu (Brian 
> Holtz) writes:
> > 3. volition: the ability to identify significant sets of options and to
> > predict one's future choices among them, in the absence of any evidence
> > that any other agent is able to predict those choices.
> >
> > There are a lot of implications to replacing free will with my notion of
> > volition, but I will just mention three.
> >
> > - If my operationalization is a truly transparent one, then it is easy
> > to see that volition (and now-dethroned free will) is incompatible with
> > an omniscient god.  Also, anyone who could not predict his behavior as
> > well as someone else could predict it would no longer be considered to
> > have volition.

[following excerpts not in any particular order]

> For me, "to have free will" means something like "to act in accord with
> my own nature".  If I'm a garrulous twit, people will be able to predict
> pretty confidently that I'll act like a garrulous twit (even though I
> may not realise this), but since I will then be behaving as I wish I
> will correctly claim free will.

Recall that my definition of free will ("the ability to make at least some 
choices that are neither uncaused nor completely determined by physical 
forces") left little room for it to exist.  Your definition (though I doubt 
you will appreciate being held to it this strictly) leaves too much room: 
doesn't a falling rock, or the average computer program, "act in accord with 
[its] own nature"?

> One thing I thought AI people were taught was "beware of the homunculus".
> As soon as we start identifying parts of our mental activity as external
> to "ourselves" we're getting into homunculus territory.

I agree that homunculi are to be avoided; that is why I relegated "the 
ability to make at least some choices that are neither uncaused nor 
completely determined by *external* physical forces" to being a definition 
not of free will, but of "self-determination".  The free will that you are 
angling for sounds a lot like what I call self-determination, and I would 
welcome any efforts to sharpen the definition so as to avoid the 
externality/internality trap.  So until someone comes up with a definition 
of free will that is better than yours and mine, I think the best course is 
to define free will out of existence and take my "volition" as the 
operationalized designated hitter for free will in our ethics.

> What has free will to do with prediction?  Presumably a dog is not
> self-conscious or engaged in predicting its activities, but does that
> mean that a dog cannot have free will?

Free will has nothing to do with prediction; volition does.  The question of 
whether a dog has free will is a simple one with either your definition *or* 
mine.  By my definition, nothing has free will; by yours, it seems to me 
that everything does. (Again, feel free to refine your definition if I've 
misconstrued it.)  A dog would seem to have self-determination as I've 
defined it, but you and I agree that my definition's reliance on 
ex/in-ternality makes it a suspect categorization.  A dog would clearly not 
have volition, since it can't make predictions about itself.  And since 
volition is what I propose as the predicate we should use in ethics, we are 
happily exempt from extending ethical personhood to dogs.

> "no longer considered to have volition ..."  I've just been reading a
> book called "Predictable Pairing" (sorry, I've forgotten the author's
> name), and if he's right it seems to me that a great many people do
> not have volition in this sense.  If we met Hoyle's "Black Cloud", and
> it with its enormous computational capacity were to predict our actions
> better than we did, would that mean that we didn't have volition any
> longer, or that we had never had it?

A very good question.  It would mean that we no longer had volition, but 
that we had had it before.  My notion of volition is contingent, because it 
depends on "the absence of any evidence that any other agent is able to 
predict" our choices.  What is attractive to me about volition is that it 
would be very useful in answering ethical questions about the "free will" 
(in the generic ethical sense) of arbitrary candidates for personhood: if 
your AI system could demonstrate volition as defined, then your system would 
have met one of the necessary conditions for personhood.  What is unnerving 
to me about my notion of volition is how contingent it is: if Hoyle's "Black 
Cloud" or some prescient god could foresee my behavior better than I could, 
I would reluctantly conclude that I do not even have an operational 
semblance of free will.  My conclusion would be familiar to anyone who
asserts (as I do) that the religious doctrine of predestination is 
inconsistent with believing in free will.  I won't lose any sleep over this, 
though; Hoyle's "Black Cloud" would most likely need to use analytical 
techniques so invasive as to leave little of me left to rue my loss of 
volition.