[sci.philosophy.tech] How to dispose of the free will issue

aarons@cvaxa.sussex.ac.uk (Aaron Sloman) (06/02/88)

(I wasn't going to contribute to this discussion, but a colleague
encouraged me. I haven't read all the discussion, so I apologise if
there's some repetition of points already made.)

Philosophy done well can contribute to technical problems (as shown by
the influence of philosophy on logic, mathematics, and computing, e.g.
via Aristotle, Leibniz, Frege, Russell).

Technical developments can also help to solve or dissolve old
philosophical problems. I think we are now in a position to dissolve the
problems of free will as normally conceived, and in doing so we can make
a contribution to AI as well as philosophy.

The basic assumption behind much of the discussion of free will is

    (A) there is a well-defined distinction between systems whose
    choices are free and those which are not.

However, if you start examining possible designs for intelligent systems
IN GREAT DETAIL you find that there is no one such distinction. Instead
there are many "lesser" distinctions corresponding to design decisions
that a robot engineer might or might not take -- and in many cases it is
likely that biological evolution tried both (or several) alternatives.

There are interesting, indeed fascinating, technical problems about the
implications of these design distinctions. Exploring them shows that
the question whether we have free will loses its interest: among the
REAL distinctions between possible designs there is no one distinction
that fits the presuppositions of the philosophical uses of the term
"free will". It does not map directly onto any one of the many
different interesting design distinctions. (A) is false.

"Free will" has plenty of ordinary uses to which most of the
philosophical discussion is irrelevant. E.g.

    "Did you go of your own free will or did she make you go?"

That question expresses a well-understood distinction between two possible
explanations for someone's action. But the answer "I went of my own free
will" does not express a belief in any metaphysical truth about human
freedom. It is merely a denial that certain sorts of influences
operated. There is no implication that NO causes, or no mechanisms were
involved.

This is a frequently made common-sense distinction between the existence
or non-existence of particular sorts of influences on a particular
individual's action. However, there are other deeper distinctions that
relate to different sorts of designs for behaving systems.

The deep technical question that I think lurks behind much of the
discussion is

    "what kinds of designs are possible for agents and what are the
    implications of different designs as regards the determinants of
    their actions?"

I'll use "agent" as short for "behaving system with something like
motives". What that means is a topic for another day. Instead of one big
division between things (agents) with and things (agents) without free
will, we then come up with a host of more or less significant
divisions, each expressing some aspect of the pre-theoretical free/unfree
distinction. Here are some examples of design distinctions (some
of which would subdivide into smaller sub-distinctions on closer
analysis):

- Compare (a) agents that are able simultaneously to store and compare
different motives with (b) agents that have no mechanisms enabling this:
i.e. they can have only one motive at a time.

- Compare (a) agents all of whose motives are generated by a single top
level goal (e.g. "win this game") with (b) agents with several
independent sources of motivation (motive generators - hardware or
software), e.g. thirst, sex, curiosity, political ambition, aesthetic
preferences, etc.

- Contrast (a) an agent whose development includes modification of its
motive generators and motive comparators in the light of experience with
(b) an agent whose generators and comparators are fixed for life
(presumably the case for many animals).

- Contrast (a) an agent whose motive generators and comparators change
partly under the influence of genetically determined factors (e.g.
puberty) with (b) an agent for whom they can change only in the light of
interactions with the environment and inferences drawn therefrom.

- Contrast (a) an agent whose motive generators and comparators (and
higher order motivators) are themselves accessible to explicit internal
scrutiny, analysis and change, with (b) an agent for which all the
changes in motive generators and comparators are merely uncontrolled
side effects of other processes (as in addictions, habituation, etc.)
[A similar distinction can be made as regards motives themselves.]

- Contrast (a) an agent pre-programmed to have motive generators and
comparators change under the influence of likes and dislikes, or
approval and disapproval, of other agents, and (b) an agent that is only
influenced by how things affect it.

- Compare (a) agents that are able to extend the formalisms they use for
thinking about the environment and their methods of dealing with it
(like human beings) and (b) agents that are not (most other animals?)

- Compare (a) agents that are able to assess the merits of different
inconsistent motives (desires, wishes, ideals, etc.) and then decide
which (if any) to act on with (b) agents that are always controlled by
the most recently generated motive (like very young children? some
animals?).

- Compare (a) agents with a monolithic hierarchical computational
architecture where sub-processes cannot acquire any motives (goals)
except via their "superiors", with only one top level executive process
generating all the goals driving lower level systems with (b) agents
where individual sub-systems can generate independent goals. In case
(b) we can distinguish many sub-cases e.g.
(b1) the system is hierarchical and sub-systems can pursue their
    independent goals if they don't conflict with the goals of their
    superiors
(b2) there are procedures whereby sub-systems can (sometimes?) override
    their superiors.

- Compare (a) a system in which all the decisions among competing goals
and sub-goals are taken on some kind of "democratic" voting basis or a
numerical summation or comparison of some kind (a kind of vector
addition perhaps) with (b) a system in which conflicts are resolved on
the basis of qualitative rules, which are themselves partly there from
birth and partly the product of a complex high level learning system.

- Compare (a) a system designed entirely to take decisions that are
optimal for its own well-being and long term survival with (b) a system
that has built-in mechanisms to ensure that the well-being of others is
also taken into account. (Human beings and many other animals seem to
have some biologically determined mechanisms of the second sort - e.g.
maternal/paternal reactions to offspring, sympathy, etc.).

- There are many distinctions that can be made between systems according
to how much knowledge they have about their own states, and how much
they can or cannot change because they do or do not have appropriate
mechanisms. (As usually there are many different sub-cases. Having
something in a write-protected area is different from not having any
mechanism for changing stored information at all.)
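
Some of these distinctions can be made concrete in a toy program. The
sketch below (all names and numeric thresholds are hypothetical, invented
for illustration and not taken from Sloman's papers) shows an agent with
several independent motive generators, simultaneous storage of competing
motives, and a qualitative priority rule for resolving conflicts -- the
(b)-style designs in several of the contrasts above:

```python
# A toy agent illustrating a few of the design distinctions above:
# several independent motive generators, simultaneous storage and
# comparison of motives, and qualitative (rule-based, not numeric)
# conflict resolution.  All names and thresholds are hypothetical.

class Agent:
    def __init__(self):
        # Independent sources of motivation (motive generators).
        self.generators = [self.thirst, self.curiosity]
        # A qualitative priority ordering; in a richer design this
        # could itself be modified in the light of experience.
        self.priority = ["drink", "explore"]
        self.state = {"hydration": 0.2, "novelty": 0.9}

    def thirst(self):
        if self.state["hydration"] < 0.5:
            return "drink"

    def curiosity(self):
        if self.state["novelty"] > 0.5:
            return "explore"

    def decide(self):
        # Store and compare several simultaneous motives, rather than
        # being controlled by the most recently generated one.
        motives = [m for g in self.generators if (m := g()) is not None]
        motives.sort(key=self.priority.index)
        return motives[0] if motives else None

agent = Agent()
print(agent.decide())  # prints: drink
```

An agent of type (a) in the first contrast would collapse `decide` to
returning whatever motive was generated last; the interesting design
space lies in the many ways the comparator and the generators can vary.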

There are some overlaps between these distinctions, and many of them are
relatively imprecise, but all are capable of refinement and can be
mapped onto real design decisions for a robot-designer (or evolution).

They are just some of the many interesting design distinctions whose
implications can be explored both theoretically and experimentally,
though building models illustrating most of the alternatives will
require significant advances in AI e.g. in perception, memory, learning,
reasoning, motor control, etc.

When we explore the fascinating space of possible designs for agents,
the question which of the various systems has free will loses interest:
the pre-theoretic free/unfree contrast totally fails to produce any one
interesting demarcation among the many possible designs -- it can be
loosely mapped on to several of them.

So the design distinctions define different notions of being free: free(1),
free(2), free(3), .... However, if an object is free(i) but not free(j)
(for i /= j) then the question "But is it really FREE?" has no answer.

It's like asking: What's the difference between things that have life and
things that don't?

The question is (perhaps) OK if you are contrasting trees, mice and
people with stones, rivers and clouds. But when you start looking at a
larger class of cases, including viruses, complex molecules of various
kinds, and other theoretically possible cases, the question loses its
point because it uses a pre-theoretic concept ("life") that doesn't have
a sufficiently rich and precise meaning to distinguish all the cases
that can occur. (Which need not stop biologists introducing a new
precise and technical concept and using the word "life" for it. But that
doesn't answer the unanswerable pre-theoretical question about precisely
where the boundary lies.)

Similarly with "what's the difference between things with and things
without free will?" This question makes the false assumption (A).

So, to ask whether we are free is to ask which side of a boundary we are
on when there is no particular boundary in question. (Which is one
reason why so many people are tempted to say "What I mean by free is..."
and they then produce different incompatible definitions.)

I.e. it's a non-issue. So let's examine the more interesting detailed
technical questions in depth.

(For more on motive generators, motive comparators, etc. see my (joint)
article in IJCAI-81 on robots and emotions, or the sequel "Motives,
Mechanisms and Emotions" in the journal of Cognition and Emotion Vol I
no 3, 1987).

Apologies for length.

Now, shall I or shan't I post this.........????

Aaron Sloman,
School of Cognitive Sciences, Univ of Sussex, Brighton, BN1 9QN, England
    ARPANET : aarons%uk.ac.sussex.cvaxa@nss.cs.ucl.ac.uk
              aarons%uk.ac.sussex.cvaxa%nss.cs.ucl.ac.uk@relay.cs.net
    JANET     aarons@cvaxa.sussex.ac.uk
    BITNET:   aarons%uk.ac.sussex.cvaxa@uk.ac
        or    aarons%uk.ac.sussex.cvaxa%ukacrl.bitnet@cunyvm.cuny.edu
As a last resort (it costs us more...)
    UUCP:     ...mcvax!ukc!cvaxa!aarons
            or aarons@cvaxa.uucp

jeff@aiva.ed.ac.uk (Jeff Dalton) (07/06/88)

In article <794@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
>Whether or not we have free will, we should behave as if we do,
>because if we don't, it doesn't matter.

If that is true -- if it doesn't matter -- then we will do just as well
to behave as if we do not have free will.


markb@sdcrdcf.UUCP (Mark Biggar) (07/08/88)

In article <488@aiva.ed.ac.uk> jeff@uk.ac.ed.aiva (Jeff Dalton,E26 SB x206E,,2295119) writes:
>In article <794@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
>>Whether or not we have free will, we should behave as if we do,
>>because if we don't, it doesn't matter.
>If that is true -- if it doesn't matter -- then we will do just as well
>to behave as if we do not have free will.

Not so: believing in free will is a no-lose situation, while
believing that you don't have free will is a no-win situation.
In the first case either you're right or it doesn't matter; in the second
case either you're wrong or it doesn't matter.  Game theory (assuming
you put more value on being right than wrong -- if it doesn't matter
there are no values anyway) says that believing and acting like you
have free will is the way that has the most expected return.
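
Biggar's argument can be made concrete with a small payoff table (the
numeric values below are illustrative assumptions, not anything stated in
the post): being right is worth +1, being wrong -1, and if there is no
free will nothing matters, so both beliefs score 0.

```python
# Sketch of the game-theoretic reading of the argument above.
# Payoffs are illustrative assumptions: right = +1, wrong = -1,
# "doesn't matter" (no free will) = 0 regardless of belief.
payoff = {
    ("believe", "free"):     +1,  # you're right
    ("believe", "unfree"):    0,  # doesn't matter
    ("disbelieve", "free"):  -1,  # you're wrong
    ("disbelieve", "unfree"): 0,  # doesn't matter
}

def expected_return(belief, p_free):
    """Expected payoff of a belief, given probability p_free of free will."""
    return (p_free * payoff[(belief, "free")]
            + (1 - p_free) * payoff[(belief, "unfree")])

# For any p_free > 0, believing strictly beats disbelieving;
# at p_free = 0 the two tie.  I.e. believing weakly dominates.
for p in (0.0, 0.1, 0.5, 0.9):
    assert expected_return("believe", p) >= expected_return("disbelieve", p)
print(expected_return("believe", 0.5))  # prints: 0.5
```

This is the same structure as Pascal's wager: one option weakly dominates
the other, so no estimate of the probability is needed to pick it.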

Mark Biggar
{allegra,burdvax,cbosgd,hplabs,ihnp4,akgua,sdcsvax}!sdcrdcf!markb
markb@rdcf.sm.unisys.com

bc@mit-amt.MEDIA.MIT.EDU (bill coderre) (07/09/88)

In article <5384@sdcrdcf.UUCP> markb@sdcrdcf.UUCP (Mark Biggar) writes:
>In article <488@aiva.ed.ac.uk> jeff@uk.ac.ed.aiva (Jeff Dalton,E26 SB x206E,,2295119) writes:
>>In article <794@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
>>>Whether or not we have free will, we should behave as if we do,
>>>because if we don't, it doesn't matter.
>>If that is true -- if it doesn't matter -- then we will do just as well
>>to behave as if we do not have free will.
>Not so, believing in free will is a no lose situation; while
>believing that you don't have free is a no win situation.




Whereas arguing about free will is a no-win situation.

Arguing about free will is also certainly not AI.

Thank you for your consideration.



mr bc

bill@proxftl.UUCP (T. William Wells) (07/11/88)

In article <5384@sdcrdcf.UUCP>, markb@sdcrdcf.UUCP (Mark Biggar) writes:
> In article <488@aiva.ed.ac.uk> jeff@uk.ac.ed.aiva (Jeff Dalton,E26 SB x206E,,2295119) writes:
> >In article <794@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
> >>Whether or not we have free will, we should behave as if we do,
> >>because if we don't, it doesn't matter.
> >If that is true -- if it doesn't matter -- then we will do just as well
> >to behave as if we do not have free will.
>
> Not so, believing in free will is a no lose situation; while
> believing that you don't have free is a no win situation.
> In the first case either your right or it doesn't matter, in the second
> case either your wrong or it doesn't matter.  Game theory (assuming
> you put more value on being right then wrong (if it doesn't matter
> there are no values anyway)) says the believing and acting like you
> have free will is the way that has the most expected return.

Pascal, I think it was, advanced essentially the same argument in
order to defend the proposition that one should believe in god.

However, both sides of the argument agree that the issue at hand
has no satisfactory resolution, and thus we are free to be
religious about it; both are also forgetting that the answer to
this question has practical consequences.

Pick your favorite definition of free will. Unless it is one
where the "free will" has no causal relationship with the rest
of the world (but then why does it matter?), the existence or
lack of existence of free will will have measurable consequences.

For example, my own definition of free will has consequences
that, among many other things, includes the proposition that,
under normal circumstances, an initiation of physical force is
harmful both to the agent and the patient. (Do not argue this
proposition in this newsgroup, PLEASE.) It also entails a
definition of the debatable terms like `normal' and `harm' by
means of which this statement can be interpreted. This means
that I can test the validity of my definition of free will by
normal scientific means and thus takes the problem of free will
out of the religious and into the practical.

gsmith@garnet.berkeley.edu (Gene W. Smith) (07/11/88)

In article <445@proxftl.UUCP>, bill@proxftl (T. William Wells) writes:

>Pick your favorite definition of free will. Unless it is one
>where the "free will" has no causal relationship with the rest
>of the world (but then why does it matter?), the existence or
>lack of existence of free will will have measurable consequences.

  Having a causal connection to the rest of the world is not the
same as having measurable consequences, so this argument won't
work. One possible definition of free will (with problems, but
don't let that worry us) is that there is no function (from
possible internal+external states to behavior, say) which
determines what the free will agent will do. To test this is
to test a negative statement about the lack of a function, which
seems hard to do, to say the least.
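
Smith's difficulty can be illustrated with a toy check (hypothetical
names, invented for this sketch): a finite log of (state, behaviour)
observations is consistent with SOME deterministic function unless the
very same state is ever seen mapped to two different behaviours -- and
with rich internal states, exact repeats may never occur at all.

```python
# Sketch of why "no function from state to behaviour" is hard to test:
# only an exact state repeat with a different outcome can refute the
# existence of such a function from a finite log of observations.

def consistent_with_some_function(log):
    """True iff no state in the log maps to two different behaviours."""
    seen = {}
    for state, behaviour in log:
        if state in seen and seen[state] != behaviour:
            return False  # a genuine counterexample to determinism
        seen[state] = behaviour
    return True

# Every state distinct: trivially consistent with determinism,
# however "freely" the behaviours were in fact produced.
log = [(("hungry", 1), "eat"), (("hungry", 2), "wait"), (("bored", 3), "explore")]
print(consistent_with_some_function(log))  # prints: True

# Only an exact repeat with a different behaviour refutes it:
log.append((("hungry", 1), "wait"))
print(consistent_with_some_function(log))  # prints: False
```

So the negative claim is unfalsifiable in practice whenever states never
exactly recur, which is Smith's point against treating it as testable.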

>For example, my own definition of free will has consequences
>that, among many other things, includes the proposition that,
>under normal circumstances, an initiation of physical force is
>harmful both to the agent and the patient. (Do not argue this
>proposition in this newsgroup, PLEASE.) It also entails a
>definition of the debatable terms like `normal' and `harm' by
>means of which this statement can be interpreted. This means
>that I can test the validity of my definition of free will by
>normal scientific means and thus takes the problem of free will
>out of the religious and into the practical.

  This is such a weak verification of your free will hypothesis
as to be nearly useless, even if I accept that you are able to
make the deduction you claim. Freud claimed that psychoanalysis
was a science, deducing all kinds of things from his egos and his
ids. But he failed to show his explanations were to be preferred
to the possible alternatives; in other words, to show his ideas
had any real explanatory power. You would need to show your
ideas, whatever they are, had genuine explanatory power to claim
you had a worthwhile scientific theory.
--
ucbvax!garnet!gsmith    Gene Ward Smith/Garnet Gang/Berkeley CA 94720
"Some people, like Chuq and Matt Wiener, naturally arouse suspicion by
behaving in an obnoxious fashion." -- Timothy Maroney, aka Mr. Mellow

ddb@ns.ns.com (David Dyer-Bennet) (07/12/88)

In article <488@aiva.ed.ac.uk>, jeff@aiva.ed.ac.uk (Jeff Dalton) writes:
> In article <794@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
> >Whether or not we have free will, we should behave as if we do,
> >because if we don't, it doesn't matter.
> If that is true -- if it doesn't matter -- then we will do just as well
> to behave as if we do not have free will.
  While I would prefer to avoid *ALL* errors, I'll settle for avoiding
all *AVOIDABLE* errors.  If I do not have free will, none of my errors
are avoidable (I had no choice, right?); so I may as well remove the entire
no-free-will arena from my realm of consideration.
  The whole concept of "choosing to believe we have no free will" is
obviously bogus -- if we're choosing, then by definition we DO have free will.
  I understand, of course, that you all may be pre-destined not to comprehend
my arguments :-)
-- 
	-- David Dyer-Bennet
	...!{rutgers!dayton | amdahl!ems | uunet!rosevax}!umn-cs!ns!ddb
	ddb@viper.Lynx.MN.Org, ...{amdahl,hpda}!bungia!viper!ddb
	Fidonet 1:282/341.0, (612) 721-8967 hst/2400/1200/300

ddb@ns.ns.com (David Dyer-Bennet) (07/12/88)

In article <445@proxftl.UUCP>, bill@proxftl.UUCP (T. William Wells) writes:
> For example, my own definition of free will has consequences
> that,.... This means
> that I can test the validity of my definition of free will by
> normal scientific means and thus takes the problem of free will
> out of the religious and into the practical.
  Yep, that's what you'd need to have to take the debate out of the
religious and into the practical.  Not meaning to sound sarcastic, but
this is a monumental philosophical breakthrough.  But could you exhibit
some of the difficult pieces of this theory; in particular, what is
the measurable difference between an action taken freely, and one that
was pre-determined by other forces?
-- 
	-- David Dyer-Bennet
	...!{rutgers!dayton | amdahl!ems | uunet!rosevax}!umn-cs!ns!ddb
	ddb@viper.Lynx.MN.Org, ...{amdahl,hpda}!bungia!viper!ddb
	Fidonet 1:282/341.0, (612) 721-8967 hst/2400/1200/300

logajan@ns.ns.com (John Logajan x3118) (07/12/88)

The no-free-will theory is untestable.
The free-will theory is likewise untestable.
When the no-free-will theorists are not thinking about their lack of free will
they invariably adopt free-will outlooks.
So go with the flow: why fight your natural instinct to believe in that which
is unprovable?  If you must choose between unprovable beliefs, take the one
that requires the least effort.

- John M. Logajan @ Network Systems; 7600 Boone Ave; Brooklyn Park, MN 55428 -
- ...amdahl!bungia!ns!logajan, {...uunet, ...rutgers} !umn-cs!ns!logajan     -

logajan@ns.ns.com (John Logajan x3118) (07/14/88)

Since we are asked to believe in unprovable things, such as the no-free-will
theory (or the free-will theory for that matter), why not believe in every
unprovable theory?

Just try combining the deterministic theory with the many worlds theory. In
many worlds, at each instant the universe splits into an infinite number of
alternate universes, each one taking a slightly different 'turn'.  i.e. in
one universe I get killed, in another I don't, etc.  Each sub-universe further
splits into an infinite number, and so on.

You can argue determinism both ways here.  After all every possibility is
addressed, and so it is deterministic in some sense and yet it isn't.

My point is that unprovable theories aren't very useful.

- John M. Logajan @ Network Systems; 7600 Boone Ave; Brooklyn Park, MN 55428 -
- {...rutgers!umn-cs, ...amdahl!bungia, ...uunet!rosevax!bungia} !ns!logajan -

ed@maven.UUCP (Ed Hand) (07/14/88)

~r dest

ed@maven.UUCP (Ed Hand) (07/14/88)

In article <611@maven.UUCP>, ed@maven.UUCP (Ed Hand) writes:
> 
> ~r dest
> 

    Sorry about that, folks.  I guess my editor didn't pick up my text file.

					Ed Hand.
			It wasn't the devil made me do it, it was destiny!

dswinney@icc.afit.arpa (David V. Swinney) (07/20/88)

In article <407@ns.ns.com> logajan@ns.ns.com (John Logajan x3118) writes:
>
>The no-free-will theory is untestable.
>The free-will theory is like-wise untestable.
>When the no-free-will theorists are not thinking about their lack of free will
>they invariably adopt free-will outlooks.
>So go with the flow, why fight your natural instincts to believe in that which
>is un-provable.  If you must choose between un-provable beliefs, take the one
>that requires the least effort.
>
I contend that the use of the phrase "free will" is misleading.  No one
(at least no one I know of) believes in *FREE* will.  
The real question is  "To what extent is the universe deterministic?".

We all (?) believe that our decisions are based on our past experience
and our personality (read genetics or spirit depending on where you are
arguing from).  Thus the question is *not* whether or not we make choices,
but rather whether or not our decision is partially or completely
determined by our prior training and nature.

The "free-will" theorists hold that our choices are only partially
deterministic and partially random.

The "no-free-will" theorists hold that our choices are completely
deterministic with no random component.

The shadings along the way tell you whether to punish crime (add negative
experiences to change behavior) or to ignore it completely (past input
makes no difference to a fully free will).

As I said before, I know no one who believes in completely free will
but the previous example indicates that the question cannot be eliminated
by pretending that only two sides of the argument exist.



The opinions I express are my own...unless they prove to be wrong (in which
case I didn't really write this.)
D.V.Swinney     dswinney@galaxy.afit.af.mil

sarge@metapsy.UUCP (Sarge Gerbode) (07/25/88)

In article <421@afit-ab.arpa> dswinney@icc.UUCP (David V. Swinney) writes:
>The "free-will" theorists hold that are choices are only partially
>deterministic and partially random.
>
>The "no-free-will" theorists hold that are choices are completely
>deterministic with no random component.

If my actions were random, I would not consider myself to have "free will".
Only if my actions were self-determined would I so consider myself.  As Bohm
pointed out: "The laws of chance are just as necessary as the causal laws
themselves." [*Causality and Chance in Modern Physics*]

I think most would agree that we have at least some degree of self-determinism,
and beyond that, we have some degree of causativeness over our own natures,
e.g. our habits and our understanding.  That is the basis upon which laws
concerning negligence rest.

How far this "second-order" self-determinism extends is an open question, but
the issue of randomness doesn't, I think, enter into it.
-- 
Sarge Gerbode -- UUCP:  pyramid!thirdi!metapsy!sarge
Institute for Research in Metapsychology
950 Guinda St.  Palo Alto, CA 94301