[comp.ai] Raising Consciousness

dvm@yale.UUCP (Drew McDermott) (05/13/88)

I would like to suggest a more constrained direction for the discussion
about free will.  In response to my proposal, Harry Plantinga wrote:

   As an argument that people don't have free will in the common sense,
   this would only be convincing to ... someone who already thinks people 
   don't have free will.

I believe most of the confusion about this concept comes from there not
being any agreed-upon "common sense" of the term "free will."  To the
extent that there is a consensus, it is probably in favor of
dualism, the belief that the absolute sway of physical law
stops at the cranium.  Unfortunately, ever since the seventeenth century,
the suspicion has been growing among the well informed that this kind of
dualism is impossible.  And that's where the free-will problem comes
from; we seem to make decisions, but how is that possible in a world
completely describable by physics?

If we want to debate about AI versus dualism (or, to be generous to
Mr. Cockton et al., AI versus something-else-ism), we can.  I don't view
the question as at all settled.  However, for the purposes of this 
discussion we ought to pretend it is settled, and avoid getting
bogged down in a general debate about whether AI is possible at
all.  Let's assume it is, and ask what place free will would have
in the resulting world view.  This attitude will inevitably require
that we propose technical definitions of free will, or propose dispensing
with the concept altogether.  Such definitions must do violence to
the common meaning of the term, if only because they will lack the
vagueness of the common meaning.  But science has always operated this
way.  

I count four proposals on the table so far:

1. (Proposed by various people) Free will has something to do with randomness.

2. (McCarthy and Hayes) When one says "Agent X can do action A," or 
"X could have done A," one is implicitly picturing a situation in which X 
is replaced by an agent X' that can perform the same behaviors as X, but 
reacts to its inputs differently.  Then "X can do A" means "There is an X' 
that would do A."  (A toy rendering appears after this list.)  It is not 
clear what free will comes to in this theory.

3. (McDermott) To say a system has free will is to say that it is
"reflexively extracausal," that is, that it is sophisticated enough
to think about its physical realization, and hence (to avoid inefficacy)
that it must regard this physical realization as exempt from
causal modeling.

4. (Minsky et al.) There is no such thing as free will.  We can dispense
with the concept, but for various emotional reasons we would rather not.
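
To make proposal 2 concrete, here is a toy rendering in Python (my own
sketch, not anything McCarthy or Hayes wrote; the repertoire and the
inputs are invented for illustration).  "X can do A" is read as
quantifying over alternative reaction policies with X's behavioral
repertoire:

    # Toy model: repertoire and inputs are invented for illustration.
    from itertools import product

    REPERTOIRE = ["buy", "sell", "hold"]      # behaviors X can perform
    INPUTS = ["market dip", "market rally"]   # situations X might face

    # Every deterministic policy from inputs to behaviors is a candidate X'.
    POLICIES = [dict(zip(INPUTS, outs))
                for outs in product(REPERTOIRE, repeat=len(INPUTS))]

    def can_do(actual_input, action):
        # "X can do A" iff some variant X' reacts to this input with A.
        return any(policy[actual_input] == action for policy in POLICIES)

    print(can_do("market dip", "buy"))   # True: some X' would buy on a dip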

I will defend my theory at greater length some other time.  Let me confine
myself here to attacking the alternatives.  The randomness theory has
the problem that it presents a necessary, but presumably not sufficient,
condition for a system to have free will.  It is all very well to say
that a coin "chose to come up heads," but I would prefer a theory that
would actually distinguish between systems that make decisions and those
that don't.  This is not (prima facie) a mystical distinction; a stock-index
arbitrage program, at least at first blush, decides to buy or sell, whereas
there is no temptation to say a coin decides anything.  The people in the
randomness camp owe us an account of this distinction.

I don't disagree with McCarthy and Hayes's idea, except that I am not
sure whether they intend to retain the notion of free will.

Position (4) is to dispense with the idea of free will altogether.  I
am half in favor of this.  I certainly think we can dispense with the
notion of "will"; having "free will" is not having a will that is free,
as opposed to brutes who have a will that is not free.  But it seems
that it is incoherent to argue that we *should* dispense with the idea
of free will completely, because that would mean that we shouldn't use
words like "should."  Our whole problem is to preserve the legitimacy
of our usual decision-making vocabulary, which (I will bet any amount)
everyone will go on using no matter what we decide.

Furthermore, Minsky's idea of a defense mechanism to avoid facing the
consequences of physics seems quite odd.  Most people have no need for
this defense mechanism, because they don't understand physics in the 
first place.  Dualism is the obvious theory for most people.  Among 
the handful who appreciate the horror of the position physics has put
us in, there are plenty of people who seem to do fine without the
defense mechanism (including Minsky himself), and they go right on
talking as if they made decisions.  Are we to believe that sufficient
psychotherapy would cure them of this?  

To summarize, I would like to see discussion confined to technical
proposals regarding these concepts, and what the consequences of adopting
one of them would be for morality.  Of course, what I'll actually see
is more meta-discussion about whether this suggestion is reasonable.

By the way, I would like to second the endorsement of Dennett's book 
about free will, "Elbow Room," which others have recommended.  I thank 
Mr. Rapoport for the reading list.  I'll return the favor with a reference 
I got from Dennett's book:

D. M. MacKay (1960), "On the logical indeterminacy of a free choice,"
Mind 69, pp. 31-40.

MacKay points out that someone could predict my behavior, but that
  (a) It would be misleading to say I was "ignorant of the truth" about
      the prediction, because I couldn't be told the truth without
      changing it.
  (b) Any prediction would be conditional on the predictor's decision
      not to tell me about it.  
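
A toy program (mine, not MacKay's) makes point (b) vivid.  Consider a
contrary agent that does the opposite of whatever it is told it will do:

    # Toy illustration (not MacKay's own formulation).
    def agent(announced=None):
        # Left alone the agent picks "tea"; told a prediction, it rebels.
        if announced is None:
            return "tea"
        return "coffee" if announced == "tea" else "tea"

    prediction = agent(None)                 # predictor models the agent silently
    print(agent(None) == prediction)         # True: unannounced, it holds
    print(agent(prediction) == prediction)   # False: telling the agent falsifies it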

                                    -- Drew McDermott

bwk@mitre-bedford.ARPA (Barry W. Kort) (05/14/88)

I was heartened by Drew McDermott's well-written summary of the Free
Will discussion.

I have not yet been dissuaded from the notion that Free Will is
an emergent property of a decision system with three agents.

The first agent generates a candidate list of possible courses
of action open for consideration.  The second agent evaluates
the likely outcome of pursuing each possible course of action,
and estimates its utility according to its value system.  The
third agent provides a coin-toss to resolve ties.

Feedback from the real world enables the system to improve its
powers of prediction and to edit its value system.
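
For concreteness, here is one way the three agents might be wired
together, as a Python sketch (the situations, actions, numbers, and
learning rule are all invented for illustration):

    # Toy wiring of the three agents; names and numbers are invented.
    import random

    def generate_candidates(situation):
        # Agent 1: propose courses of action open for consideration.
        return ["act A", "act B", "act C"]

    def evaluate(action, values):
        # Agent 2: estimate the utility of an action per the value system.
        return values.get(action, 0.0)

    def decide(situation, values):
        candidates = generate_candidates(situation)
        best = max(evaluate(a, values) for a in candidates)
        tied = [a for a in candidates if evaluate(a, values) == best]
        return random.choice(tied)        # Agent 3: a coin toss resolves ties

    def learn(action, observed_utility, values, rate=0.1):
        # Feedback from the real world edits the value system.
        values[action] += rate * (observed_utility - values[action])

    values = {"act A": 0.5, "act B": 0.5, "act C": 0.1}
    choice = decide("some dilemma", values)
    learn(choice, observed_utility=1.0, values=values)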

If the above model is at all on target, the decision system would
seem to have free will.  And it would not be unreasonable to hold
it accountable for its actions.

On another note, I think it was Professor Minsky who wondered
how we stop deciding an issue.  My own feeling is that we terminate
the decision-making process when a more urgent or interesting
issue pops up.  The main thing is that our decision-making machinery
chews on whatever dilemma captures its attention.
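
In the same toy vein (again, everything here is invented), deliberation
that ends by preemption rather than by conclusion might look like this:

    # Toy sketch; issues and interest levels are invented.
    def deliberate(issue, incoming):
        # issue and incoming entries are (interest, description) pairs.
        for step in range(100):          # chew on the dilemma...
            if incoming and incoming[0][0] > issue[0]:
                # ...until something more interesting captures attention.
                return deliberate(incoming.pop(0), incoming)
        return issue                     # rarely: we actually finish

    incoming = [(0.9, "smoke alarm going off")]
    print(deliberate((0.4, "what to have for lunch"), incoming))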

--Barry Kort

sher@sunybcs.uucp (David Sher) (05/16/88)

There is perhaps a minor bug in the analysis of free will by Drew
McDermott (who teaches a great grad-level AI class).  If I understand
it correctly, it runs like this:
To plan, one needs a world model that includes future events.
Since you are an element of the world, you must be in the model.
Since the model is a model of future events, your future actions
are in the model.
This renders planning unnecessary.
Thus your own actions must be excised from the planning model to
avoid this "singularity."

Taken naively, this analysis would prohibit multilevel analyses such
as are common in game theory.  A chess player could not say things like
"if he moves a6, then I will move Nc4 or Bd5, which will lead ...."
Thus it is clear that to make complex plans we actually need to model
ourselves (actually it is not clear, but I think it can be made clear
with sufficient thought); a toy version is sketched below.
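
Here is what a bounded self-model might look like as a toy Python
planner (entirely my own construction; the one-dimensional "world" is
just a stand-in).  The depth cutoff plays the role of the excision: at
depth zero the agent simply stops modeling its own future choices, and
the regress bottoms out:

    # Toy planner; the integer "world" is a stand-in.
    ACTIONS = ["left", "right"]

    def physics(world, action):
        # Stand-in world: an integer nudged up or down by the action.
        return world + (1 if action == "right" else -1)

    def score(world):
        return -abs(world - 3)   # the agent "wants" the world to equal 3

    def choose(world, depth):
        # The agent's self-model is itself, but only to a bounded depth.
        if depth == 0:
            return max(ACTIONS, key=lambda a: score(physics(world, a)))
        return max(ACTIONS,
                   key=lambda a: score(lookahead(physics(world, a), depth - 1)))

    def lookahead(world, depth):
        # Predicting the future world means predicting one's own next
        # action; remove the depth cutoff and this recursion never ends.
        return physics(world, choose(world, depth))

    print(choose(0, depth=2))   # "right"; terminates only thanks to the cutoff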

However, we can still make the argument that Drew was making; it's just
more subtle than the naive analysis indicates.  The way the argument
runs is this:
Our world model is by its very nature a simplification of the real
world (the real world doesn't fit in our heads).  Thus our world model
makes imperfect predictions about the future and about consequences.
Our self-model inside our world model shares in this imperfection.
Thus our self-model makes inaccurate predictions about our reactions
to events.  We perceive ourselves as having free will when our
self-model makes a wrong prediction.

A good example of this is the way I react during a chess game.  I
generally plan 2-5 moves in advance.  However, sometimes when I make a
move and my opponent responds as expected, I notice a pattern that
previously eluded me.  This pattern allows me to make a move that was
not in my plans at all but leads to greater gains than I had planned
(noticing a knight fork, for example).  When this happens I have an
intense feeling of free will.
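
In toy form (my own sketch; the 20% "insight" rate is an arbitrary
stand-in for noticing something like that knight fork):

    # Toy sketch; the 0.2 insight rate is arbitrary.
    import random
    random.seed(1)

    def full_agent(move_number):
        # The real decision process occasionally spots a pattern
        # that the standing plan did not anticipate.
        if random.random() < 0.2:
            return "unplanned tactical move"
        return "planned move"

    def self_model(move_number):
        # The self-model is a simplification: it knows only the plan.
        return "planned move"

    for move in range(10):
        if self_model(move) != full_agent(move):
            print(f"move {move}: self-model mispredicts; the 'feeling of free will'")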

As another example, I had planned on writing a short five-line note
describing this position.  In fact this article is running several
pages.  ...

-David Sher
ARPA: sher@cs.buffalo.edu	BITNET: sher@sunybcs
UUCP: {rutgers,ames,boulder,decvax}!sunybcs!sher

spector@cvl.umd.edu (Lee Spector) (05/19/88)

Jean-Paul Sartre, in THE TRANSCENDENCE OF THE EGO, discusses the freedom
of the individual and its relation to "unreflected consciousness," which
is necessarily irreflexive and alleged to be necessarily present in any
consciousness.  I will not attempt to summarize Sartre's arguments here, for
I would surely do an inadequate job (though perhaps I will work on this
and post again later).  I recommend the book to all who have been provoked
by Drew McDermott's comments on "reflexive extracausality" and freedom; it
is short, bold, and a relatively easy read for continental philosophy.
Indeed, several years ago I re-read the book and was led to formulate
the following loose AI "law":  

 An intelligent organism must have a blind spot for itself.

I'm not entirely sure what this means or how one would argue for it,
but perhaps there is some significance lurking here.
Sartre provides a philosophical grounding for
such ideas, while computability theory may provide a mathematical 
basis (though I'm not sure exactly how). 
McDermott has rephrased the contention in the language of AI systems.


    - Lee Spector
      Computer Science Department
      University of Maryland, College Park
      (spector@cvl.umd.edu)

"You will not find the limits of the soul by going, even if you travel over
every way, so deep is its report."  Heraclitus  (approx. 500 BC)