[comp.ai.digest] free will

GODDEN@gmr.COM (05/05/88)

$0.02: "The Mysterious Stranger" by Mark Twain is a novella dealing with
free will and determinism that the readers of this list may find interesting.

AIList-REQUEST@AI.AI.MIT.EDU (AIList Moderator Nick Papadakis) (05/25/88)

Return-Path: <@AI.AI.MIT.EDU:ailist-request@ai.ai.mit.edu>
Date: 14 May 88 22:46:42 GMT
From: sunybcs!stewart@boulder.colorado.edu  (Norman R. Stewart)
Subject: Re: Free Will
Sender: ailist-request@ai.ai.mit.edu
To: ailist@ai.ai.mit.edu


paulg@iisat.UUCP (Paul Gauthier) writes:
> I'm sorry, but there is no free will. Every one of us is bound by the
>laws of physics. No one can lift a 2000 tonne block of concrete with his
>bare hands. No one can do the impossible, and in this sense none of us have
>free will.
 
     I don't believe we're concerned with what we are capable of doing,
but rather our capacity to desire to do it.  Free will is a mental, not
a physical, phenomenon.  What we're concerned with is whether the brain
(nervous system, organism, aggregation of organisms and objects) is just
so many atoms (sub-atomic particles? sub-sub-atomic particles?) bouncing
around according to the laws of physics (in a closed system), and
behavior simply the unalterable manifestation of the movement of these
particles.





Norman R. Stewart Jr.             *
C.S. Grad - SUNYAB                *  If you want peace, demand justice.
internet: stewart@cs.buffalo.edu  *                  (of unknown origin)  
bitnet:   stewart@sunybcs.bitnet  * 

NICK@AI.AI.MIT.EDU (Nick Papadakis) (05/28/88)

Date: Wed, 25 May 88 10:32 EDT
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: free will 
To: ailist@AI.AI.MIT.EDU

The following propositions are elaborated in

{\bf McCarthy, John and P.J. Hayes (1969)}:  ``Some Philosophical Problems from
the Standpoint of Artificial Intelligence'', in D. Michie (ed), {\it Machine
Intelligence 4}, American Elsevier, New York, NY.

I would be grateful for discussion of them - especially technical discussion.

1. For AI, the key question concerning free will is "What view should
we program a robot to have of its own free will?".  I believe my
proposal for this also sheds light on what view we humans should take
of our own free will.

2. We have a problem, because if we put the wrong assertions in our
database of common sense knowledge,  a logic-based robot without a
random element might conclude that since it is a deterministic robot,
it doesn't make sense for it to consider alternatives.  It might reason:
"Since I'm a robot, what I will do is absolutely determined, so any
consideration of whether one course of action or another would
violate (for example) Asimov's suggestion that robots shouldn't
harm human beings is pointless".

3. Actually (McCarthy and Hayes 1969) considered an even more
deterministic system than a robot in the world - namely a system
of interconnected finite automata and asked the question:  "When
should we say that in a given initial situation, automaton 1
can put automaton 7 in state 3 by time 10?"

4. The proposed answer makes this a definite question about
another automaton system, namely a system in which automaton
1 is removed from the original system, and its output lines
are replaced by external inputs to the revised system.  We
then say that automaton 1 can put automaton 7 in state 3
by time 10 provided there is a sequence of signals on the
external inputs to the revised system that will do it.
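McCarthy's revised-system definition of "can" lends itself to a brute-force search.  The following is a minimal sketch, not the formalism of (McCarthy and Hayes 1969); the function name `can_achieve`, the synchronous `step` interface, and the toy counter automaton are all invented for illustration.  The idea: automaton 1 has been removed, its output lines are now external inputs, and we ask whether some input sequence drives the target automaton into the target state by the deadline.

```python
from itertools import product

def can_achieve(step, initial, alphabet, target_idx, target_state, deadline):
    # Try every external-input sequence up to the deadline; return the
    # (truncated) sequence that puts automaton `target_idx` into
    # `target_state` by `deadline` steps, or None if none exists.
    # `step(states, x)` advances the revised system one tick, where `x`
    # is the signal substituted for automaton 1's removed output lines.
    for seq in product(alphabet, repeat=deadline):
        states = initial
        for t, x in enumerate(seq):
            states = step(states, x)
            if states[target_idx] == target_state:
                return seq[:t + 1]   # "automaton 1 can do it" -- a witness exists
    return None                      # no input sequence suffices

# Toy revised system: one remaining automaton, a counter mod 4 driven by
# the external input that replaced automaton 1's output.
def step(states, x):
    return ((states[0] + x) % 4,)

witness = can_achieve(step, (0,), (0, 1), 0, 3, 10)
```

Here `witness` is a sequence of external inputs that puts automaton 0 into state 3 within 10 steps, so in the proposed sense automaton 1 "can" achieve that state.  The exponential enumeration is of course only for exposition.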

5. I claim this is how we want the robot to reason.  We should program it
to decide what it can do, i.e. the variety of results it can achieve, by
reasoning that doesn't involve its internal structure but only its place
in the world.  Its program should then decide what to do based on
what will best achieve the goals we have also put in its database.

6. I claim that my own reasoning about what I can do proceeds similarly.
I model the world as a system of interacting parts of which
I am one.  However, when deciding what to do, I use a model in
which my outputs are external inputs to the system.

7. This model says that I am free to do those things that suitable
outputs will do in the revised system.  I recommend
that any "impressionable students" in the audience take the same
view of their own free will.  In fact, I'll claim they already do,
unless mistaken philosophical considerations have given them
theories inferior to the most naive common sense.

8. The above treats "physical ability".  An elaboration involving
knowledge, i.e. that distinguishes my physical ability to dial
your phone number from my epistemological ability that requires
knowing the number, is discussed in the paper.

These views are compatible with Dennett's and maybe Minsky's.
In my view, McDermott's discussion would be simplified if he
incorporated discussion of the revised automaton system.

NICK@AI.AI.MIT.EDU (Nick Papadakis) (05/28/88)

From: Drew McDermott <mcdermott-drew@YALE.ARPA>
Full-Name: Drew McDermott
Date: Wed, 25 May 88 11:26 EDT
Subject: free will
To: ailist@ai.ai.mit.edu

I would like to suggest a more constrained direction for the discussion
about free will.  In response to my proposal, Harry Plantinga wrote:

   As an argument that people don't have free will in the common sense,
   this would only be convincing to ... someone who already thinks people 
   don't have free will.

I believe most of the confusion about this concept comes from there not
being any agreed-upon "common sense" of the term "free will."  To the
extent that there is a common consensus, it is probably in favor of
dualism, the belief that the absolute sway of physical law
stops at the cranium.  Unfortunately, ever since the seventeenth century,
the suspicion has been growing among the well informed that this kind of
dualism is impossible.  And that's where the free-will problem comes
from; we seem to make decisions, but how is that possible in a world
completely describable by physics?

If we want to debate about AI versus dualism (or, to be generous to
Mr. Cockton et al., AI versus something-else-ism), we can.  I don't view
the question as at all settled.  However, for the purposes of this 
discussion we ought to pretend it is settled, and avoid getting
bogged down in a general debate about whether AI is possible at
all.  Let's assume it is, and ask what place free will would have
in the resulting world view.  This attitude will inevitably require
that we propose technical definitions of free will, or propose dispensing
with the concept altogether.  Such definitions must do violence to
the common meaning of the term, if only because they will lack the
vagueness of the common meaning.  But science has always operated this
way.  

I count four proposals on the table so far:

1. (Proposed by various people) Free will has something to do with randomness.

2. (McCarthy and Hayes) When one says "Agent X can do action A," or 
"X could have done A," one is implicitly picturing a situation in which X 
is replaced by an agent X' that can perform the same behaviors as X, but 
reacts to its inputs differently.  Then "X can do A" means "There is an X' 
that would do A." It is not clear what free will comes to in this theory.

3. (McDermott) To say a system has free will is to say that it is
"reflexively extracausal," that is, that it is sophisticated enough
to think about its physical realization, and hence (to avoid inefficacy)
that it must realize that this physical realization is exempt from
causal modeling.

4. (Minsky et al.) There is no such thing as free will.  We can dispense
with the concept, but for various emotional reasons we would rather not.

I will defend my theory at greater length some other time.  Let me confine
myself here to attacking the alternatives.  The randomness theory has
the problem that it presents a necessary, but presumably not sufficient,
condition for a system to have free will.  It is all very well to say
that a coin "chose to come up heads," but I would prefer a theory that
would actually distinguish between systems that make decisions and those
that don't.  This is not (prima facie) a mystical distinction; a stock-index
arbitrage program decides to buy or sell, at least at first blush, whereas
there is no temptation to say a coin decides anything.  The people in the
randomness camp owe us an account of this distinction.

I don't disagree with McCarthy and Hayes's idea, except that I am not
sure exactly whether they want to retain the notion of free will.

Position (4) is to dispense with the idea of free will altogether.  I
am half in favor of this.  I certainly think we can dispense with the
notion of "will"; having "free will" is not having a will that is free,
as opposed to brutes who have a will that is not free.  But it seems
that it is incoherent to argue that we *should* dispense with the idea
of free will completely, because that would mean that we shouldn't use
words like "should."  Our whole problem is to preserve the legitimacy
of our usual decision-making vocabulary, which (I will bet any amount)
everyone will go on using no matter what we decide.

Furthermore, Minsky's idea of a defense mechanism to avoid facing the
consequences of physics seems quite odd.  Most people have no need for
this defense mechanism, because they don't understand physics in the 
first place.  Dualism is the obvious theory for most people.  Among 
the handful who appreciate the horror of the position physics has put
us in, there are plenty of people who seem to do fine without the
defense mechanism (including Minsky himself), and they go right on
talking as if they made decisions.  Are we to believe that sufficient
psychotherapy would cure them of this?  

To summarize, I would like to see discussion confined to technical
proposals regarding these concepts, and what the consequences of adopting
one of them would be for morality.  Of course, what I'll actually see
is more meta-discussion about whether this suggestion is reasonable.

By the way, I would like to second the endorsement of Dennett's book 
about free will, "Elbow Room," which others have recommended.  I thank 
Mr. Rapoport for the reading list.  I'll return the favor with a reference 
I got from Dennett's book:

{\bf Mackay, D.M. (1960)}:  ``On the Logical Indeterminacy of a Free
Choice'', {\it Mind \bf 69}, pp. 31--40.

Mackay points out that someone could predict my behavior, but that
  (a) It would be misleading to say I was "ignorant of the truth" about
      the prediction, because I couldn't be told the truth without
      changing it.
  (b) Any prediction would be conditional on the predictor's decision
      not to tell me about it.  



-------

NICK@AI.AI.MIT.EDU (Nick Papadakis) (05/28/88)

Date: Thu, 26 May 88 14:16 EDT
From: Carl DeFranco <DEFRANCO@RADC-TOPS20.ARPA>
Subject: Free Will
To: ailist@AI.AI.MIT.EDU


       The long standing discussion of Free Will has left me a little
       confused.  I had always considered Free Will a subject for the
       philosophers and theologians, as in their knowledge domain,
       the concept allows attribution of responsibility to
       individuals for their actions.  Thus, theologians may discuss
       the concept of Sin as being a freely chosen action contrary to
       some particular moral code.  Without Free Will, there is no
       sin, and no responsibility for action, since the alternatives
       are total determinism, total randomness, or some mix of the
       two to allow for entropy.

       For my two cents, I have accepted Free Will as the ability to
       make a choice between two (or possibly more) available
       courses of action.  This precludes such silly notions as the
       will to defy gravity.  Free will applies to choice, not to
       plausibility.  These choices will be guided by the individual's
       experience and environment, a knowledge base if you would,
       that provide some basis for evaluating the choices and the
       consequences of choosing.  To the extent that an individual
       has been trained toward a particular behavior pattern, his/her
       choices may be predicted with some probability.  In other
       circumstances, where there is no experience base or training,
       choices will appear to be more random.

       In general, people do what they please OR what pleases them.
       It is this background guidance that changes from time to time,
       and inserts the mathematical randomness into whatever model is
       used to predict behavior.  Today it may please me to follow
       the standard way of thinking in exploring some concept.
       Tomorrow I may be more pleased to head off in some oddball
       direction.  It is this "Free Will" choice, in my view, that
       creates the intelligence in human beings.  We may take in
       information and examine it from several points of view,
       selecting a course of action from those points, and adding the
       results to our experience, i.e. we learn.

       As I follow AI from the sideline in my job, I won't presume to
       prescribe The Answer, but it would appear that true Artificial
       Intelligence can be given to a computer when:
            1.  It can learn from its experience.
            2.  It can test a "What If" from its knowledge.
            3.  There is some limited range of allowable random
                selection.
       Perhaps I take a simplistic view, but there appear to be a
       number of one-sided viewpoints, either philosophic or
       technical.
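DeFranco's three conditions can be collected into a toy agent.  This is a sketch under loose assumptions, not a claim about how "true" AI should be built; the class name, the average-payoff model, and the `explore` parameter are all invented for the example.

```python
import random

class Agent:
    """Toy agent illustrating the three conditions above:
    1. it learns from its experience,
    2. it can test a "what if" against its knowledge, and
    3. it allows a limited range of random selection."""

    def __init__(self, actions, explore=0.1):
        self.actions = actions
        self.explore = explore      # condition 3: bounded randomness
        self.knowledge = {}         # condition 1: the experience base

    def what_if(self, action):
        # Condition 2: predicted payoff of an action, estimated as the
        # average of past outcomes (zero if the action is untried).
        outcomes = self.knowledge.get(action, [0.0])
        return sum(outcomes) / len(outcomes)

    def choose(self):
        # Occasionally head off in some oddball direction...
        if random.random() < self.explore:
            return random.choice(self.actions)
        # ...otherwise follow the best "what if" so far.
        return max(self.actions, key=self.what_if)

    def learn(self, action, payoff):
        # Add the result to the experience base, i.e. learn.
        self.knowledge.setdefault(action, []).append(payoff)
```

With `explore=0` the agent is fully determined by its experience; raising `explore` injects exactly the "mathematical randomness" DeFranco describes.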

       Carl DeFranco
       defranco@radc-tops20.arpa
-------

NICK@AI.AI.MIT.EDU (Nick Papadakis) (06/04/88)

Date: Fri, 3 Jun 88 19:18 EDT
From: hayes.pa@Xerox.COM
Subject: Re: Free Will
To: AIList@AI.AI.MIT.EDU

Drew McDermott has written a lucidly convincing account of an AI approach to
what could be meant by `free will'.  Now, can we please move the rest of this
stuff - in particular, anything which brings in such topics as:  a decent world
to live in , Hitler and Stalin , Spanking , an omniscient god[sic] , ethics,
Hoyle's "Black Cloud",  sin, and laws, and purpose, and the rest of what
Vedanta/Budhists would call the "Illusion", Dualism  or  the soul - to somewhere
else; maybe talk. philosophy, but anywhere except here.

Pat Hayes

NICK@AI.AI.MIT.EDU (Nick Papadakis) (06/06/88)

Date: Sun, 5 Jun 88 00:44 EDT
From: Raymond E. Levitt <LEVITT@Score.Stanford.EDU>
Subject: Free Will
To: ailist@AI.AI.MIT.EDU

Raymond E. Levitt
Associate Professor
Center for Integrated Facility Engineering
Departments of Civil Engineering and Computer Science
Stanford University
==============================================================

Several colleagues and I would like to request that the free will debate -
which seems endless - be set up on a different list with one of the more
active contributors as a coordinator.  

The value of the AILIST as a source of current AI research issues, conferences,
software queries and evaluations, etc., is diminished for us by having to 
plough through the philosophical dialectic in issue after issue of the AILIST.

Perhaps you could run this message and take a poll of LIST readers to help
decide this in a democratic way.

Thanks for taking on the task of coordinating the AILIST.  It is a great
service to the community.

Ray Levitt
-------


   [Editor's Note:
   
   	Thank you, Mr. Levitt, and many thanks to all those who have
   written expressing interest or comments regarding AIList.  I regret that
   I have not had time to respond to many of you individually, as I have
   lately been more concerned with the simple mechanics of generating
   digests and dealing with the average of sixty bounce messages per day
   than with the more substantive issues of moderation.
   
   	However, a new COMSAT mail-delivery program is now orbiting, and
   we may perhaps be able to move away from the days of lost messages,
   week-long delays, and 50K digests ...  My heartfelt apologies to all.
   
   	Being rather new at this job, I have hesitated to express my
   opinion with respect to the free-will debate, preferring to retain the
   status quo and hoping that the problem would fix itself.   But since Mr.
   Levitt is only the latest of several people who have complained about
   this particular issue, I feel I must take some action.
   
   	Clearly this discussion is interesting and valuable to many of
   the participants, but equally clearly it is less so for many others.  I
   have tried as far as possible to group the free-will discussions in
   digests apart from other matters, so people uninterested in the topic
   could simply 'delete' the offending digests unread.  (There are many
   readers who only have access to the undigested stream and cannot do
   this.)
   
	Several people have suggested moving the discussion to a USENET
   list called 'talk.philosophy'.  The difficulty here is that AIList
   crosses USENET, INTERNET and BITNET, and not all readers would be able
   to contribute.  In V7#6, John McCarthy <JMC@SAIL.Stanford.EDU> said:
   
   > I am not sure that the discussion should progress further, but if
   > it does, I have a suggestion.  Some neutral referee, e.g. the moderator,
   > should nominate principal discussants.  Each principal discussant should
   > nominate issues and references.  The referee should prune the list
   > of issues and references to a size that the discussants are willing
   > to deal with.  They can accuse each other of ignorance if they
   > don't take into account the references, however perfunctorily.
   > Each discussant writes a general statement and a point-by-point
   > discussion of the issues at a length limited by the referee in
   > advance.  Maybe the total length should be 20,000 words,
   > although 60,000 would make a book.  After that's done we have another
   > free-for-all.  I suggest four as the number of principal discussants
   > and volunteer to be one, but I believe that up to eight could
   > be accommodated without making the whole thing too unwieldy.
   > The principal discussants might like help from their allies.
   > 
   > The proposed topic is "AI and free will".
   
   	I would be more than willing to coordinate this effort, but I
   have, as yet, received no responses expressing an opinion one way or the
   other.  I invite the readers of AIList who have found the free-will
   discussion interesting (as opposed to those who have not) to send me net
   mail at AILIST-REQUEST@AI.AI.MIT.EDU concerning the future of this
   discussion.  Please send me a separate message, and do not intersperse
   your comments with other contributions, whether to the free-will debate
   or other matters.
   
   	In the meantime, I will continue to send out digests covering
   the free-will topic, although separate from other material.
   
   		- nick  ]
   
   
   

YLIKOSKI@FINFUN.BITNET (Antti Ylikoski tel +358 0 457 2704) (06/17/88)

X-Delivery-Notice:  SMTP MAIL FROM does not correspond to sender.
Date: Tue, 14 Jun 88 07:37 EDT
From: Antti Ylikoski tel +358 0 457 2704 <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject:  RE: Free will
To: AIList@AI.AI.MIT.EDU
X-VMS-To: IN%"AIList@AI.AI.MIT.EDU",YLIKOSKI

That would seem to make sense.

I'm a spirit/body dualist; humans have a spirit or a soul, and
we have not so far made a machine with one; but we can make
new souls (new humans).

Then the idea arises that one could model the human soul.

Antti Ylikoski

JMC@SAIL.STANFORD.EDU (John McCarthy) (07/27/88)

Date: Sun, 24 Jul 88 18:26 EDT
From: John McCarthy <JMC@SAIL.Stanford.EDU>
Subject: free will 
To: AIList@AI.AI.MIT.EDU

[In reply to message sent Sun 24 Jul 1988 02:00-EDT.]

Almost all the discussion is too vague to be a contribution.  Let me
suggest that AI people concentrate their attention on the question of how
a deterministic robot should be programmed to reason about its own free
will, as this free will relates both to its past choices and to its future
choices.  Can we program it to do better in the future than it did in the
past by reasoning that it could have done something different from what it
did, and this would have had a better outcome?  If yes, how should it be
programmed?  If no, then doesn't this make robots permanently inferior to
humans in learning from experience?

Philosophers may be excused.  They are allowed to take the view that
the above questions are too grubbily technical to concern them.

GKMARH@IRISHMVS.BITNET (steven horst 219-289-9067) (07/27/88)

X-Delivery-Notice:  SMTP MAIL FROM does not correspond to sender.
Date: Mon, 25 Jul 88 14:22 EDT
To: ailist@ai.ai.mit.edu
From: steven horst 219-289-9067 <GKMARH%IRISHMVS.BITNET@MITVMA.MIT.EDU>
Subject: Free Will (long)

A few quibbles about some characterizations of free will and
related problems:
(1) D.V.Swinney (dsinney@galaxy.afit.af.mil) writes:
> The "free-will" theorists hold that are (sic) choices are only
> partially deterministic and partially random.
>
> The "no-free-will" theorists hold that are (sic) choices are
> completely deterministic with no random component.

I'm not really sure whether Swinney means to equate free will with
randomness, but if he does he is surely mistaken.  On the one hand,
there are some kinds of randomness that are of no use to the free
will theorist: the kind of randomness suggested by quantum physics,
for example, does not give the free will theorist what he wants.
One can believe in quantum indeterminism without believing in
free will.  On the other hand, the term "choice" is ambiguous between
(a) the ACT OF CHOOSING and (b) THAT WHICH IS CHOSEN (in this case,
let's say the behavior that results from the choosing).  It's not
clear which of these Swinney means.  I think that what the free will
theorist (at least some free will theorists, at any rate) would say
is that the CHOOSING is not determined (in the sense of being the
inevitable result of a previous state of affairs governed by a
universal law), but the resulting behavior IS, in a sense, determined:
it is determined by the act of choosing and the states of the
organism and its environment that allow what is chosen to be carried
out.  (There is a fairly large philosophical corpus on the subject
of "agent causation".)
    What the advocate of free will (we'll exclude compatibilists for
the moment) must not say is that choices freely made can receive
an adequate explanation in terms of natural laws and states of
affairs prior to the free act.  So Swinney is right that
(non-compatibilist) free will theorists are not determinists.  But
randomness just doesn't capture what the free will theorist is after.
And I think the reason is something like this: human actions can be
looked at from an "external" perspective, just like any other events
in the world.  As such, they either fall under laws covering causal
regularities or they do not, and so from this perspective they are
either determined or random.  But unlike other events in nature,
the actions (and mental states) of thinking beings can also be
understood from an "internal" or "first-person" perspective.  It is
only by understanding this perspective that the notion of FREEDOM
becomes intelligible.  Moreover, it is not clear that the two
perspectives are commensurable - so it isn't really clear that one
can even ask coherent questions about freedom and determinism.
At any rate, the notions of "freedom" and "bondage" of the will
are not reducible to indeterminism and determinism.

(2) John Logan (logajan@ns.uucp) writes that
> Unproveable theories aren't very useful.
   and that
> Unproveable theories are rather special in that they usually only
> occur to philosophers.

     If we were talking about logic or mathematics, Logan's assertions
might be correct, though even there some of the most interesting
"theories" are not known to be proveable.  But in the sciences, NO
interesting theories are proveable, as Karl Popper argued so
persuasively (and frequently and loudly) for many years.  The nature
of the warrant for scientific theories is a complicated thing.
(For those interested, I would recommend Newton-Smith's book on
the subject, which as I recall is entitled "Rationality in Science".)
Perhaps Logan did not mean to conjure visions of the logical
positivists when he used the word "proveable", in which case I
apologize for conjuring Popper in return.  But the word "proof"
really does bring to mind a false, if popular, picture of the nature
of scientific research.

Steven Horst
   BITNET.......gkmarh@irishmvs
   SURFACE......Department of Philosophy
                Notre Dame, IN  46556

dhw@itivax.UUCP (David H. West) (08/02/88)

To: comp-ai-digest@uunet.UU.NET
Path: umich!itivax!dhw
From: David H. West <umix!umich!eecs.umich.edu!itivax!dhw@uunet.UU.NET>
Newsgroups: comp.ai.digest
Subject: Re: free will
Date: Thu, 28 Jul 88 14:20 EDT
References: <19880727030413.0.NICK@HOWARD-JOHNSONS.LCS.MIT.EDU>
Organization: Institute for Defense Astrology
Lines: 37


In a previous article, John McCarthy writes:
> Let me
> suggest that AI people concentrate their attention on the question of how
> a deterministic robot should be programmed to reason about its own free
> will, as this free will relates both to its past choices and to its future
> choices.  Can we program it to do better in the future than it did in the
> past by reasoning that it could have done something different from what it
> did, and this would have had a better outcome?  If yes, how should it be
> programmed?  If no, then doesn't this make robots permanently inferior to
> humans in learning from experience?

At time t0, the robot has available a partial (fallible) account of:
the world-state, its own possible actions, the predicted
effects of these actions, and the utility of these
effects.  Suppose it wants to choose the action with maximum
estimated utility, and further suppose that it can and does do this.
Then its decision is determined.  Free-will (whatever that is)
is merely the freedom to do something that doesn't maximize its
utility, which is ex hypothesi not a freedom worth exercising.

At a later time t1, the robot has available all of the above, plus
the outcome of its action.  It is therefore not in the same state as
previously. It would make no sense to ignore the additional
information.  If the outcome was as expected, then there is no
reason to make a different choice next time unless some other
element of the situation changes.  If the outcome was not as
predicted, the robot needs to update its models.  This updating is
another choice-of-action-under-incomplete-information problem, so
again the robot can only maximize its own fallibly-estimated
utility, and again its behavior is determined, not (just) by its
physical structure, but by the meta-goal of acting coherently.

If the robot thought about its situation, it would presumably
conclude that it felt no impediment to doing what was obviously the
correct thing to do, and that it therefore had free will.
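West's t0/t1 loop can be written down directly.  This is a minimal sketch under his assumptions (the robot can and does maximize its fallibly-estimated utility); the function and parameter names are invented, and `model` is simply a map from actions to predicted outcomes.

```python
def decide_and_update(model, utility, observe):
    # t0: the decision is fully determined by maximization over the
    # robot's (fallible) model of action outcomes.
    choice = max(model, key=lambda a: utility(model[a]))
    # t1: the robot is in a new state; the actual outcome is now known.
    outcome = observe(choice)
    # If the outcome was not as predicted, update the model -- itself a
    # determined choice, driven by the meta-goal of acting coherently.
    if outcome != model[choice]:
        model[choice] = outcome
    return choice
```

Nothing in the loop is free in the sense of escaping the maximization, yet at every step the robot feels no impediment to doing the obviously correct thing - which is West's point.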

-David West        dhw%iti@umix.cc.umich.edu