[comp.ai] AIList V6 #86 - Philosophy

yamauchi@speech2.cs.cmu.edu (Brian Yamauchi) (05/03/88)

In article <368693.880430.MINSKY@AI.AI.MIT.EDU>, MINSKY@AI.AI.MIT.EDU (Marvin Minsky) writes:
> Yamauchi, Cockton, and others on AILIST have been discussing freedom
> of will as though no AI researchers have discussed it seriously.  May
> I ask you to read pages 30.2, 30.6 and 30.7 of The Society of Mind.  I
> claim to have a good explanation of the free-will phenomenon.

Actually, I have read The Society of Mind, where Minsky writes:

| Everything that happens in our universe is either completely determined
| by what's already happened in the past or else depends, in part, on
| random chance.  Everything, including that which happens in our brains,
| depends on these and only on these :
|
| A set of fixed, deterministic laws.	A purely random set of accidents.
|
| There is no room on either side for any third alternative.

I would agree with this.  In fact, unless one believes in some form of
supernatural forces, this seems like the only rational alternative.

My point is that it is reasonable to define free will, not as some mystical
third alternative, but as the decision making process that results from
the interaction of an individual's values, memories, emotional state, and
sensory input.

As to whether this is "free" or not, it depends on your definition of
freedom.  If freedom requires some force independent of genetics,
experience, and chance, then I suppose this is not free.  If freedom
consists of allowing an individual to make his own decisions without
coercion from others, then this kind of decision making is entirely
compatible with freedom.

If I am interpreting Minsky's book correctly, I think we agree that it is
possible (in the long run) for AIs to have the same level of decision-making
ability / self-awareness as humans.  The only difference is that he would
say this means that neither humans nor AIs have free will, while I would
say that (using the above definition) humans do have free will and AIs have
the potential for free will.

On the other hand, Cockton writes:

>The main objection to AI is when it claims to approach our humanity.
>
>			It cannot.

Cockton seems to be saying that humans do have free will, but that it is
totally impossible for AIs ever to have free will.  I am curious as to what
he bases this belief on, other than "conflict with traditional Western values".
______________________________________________________________________________

Brian Yamauchi                      INTERNET:    yamauchi@speech2.cs.cmu.edu
Carnegie-Mellon University
Computer Science Department
______________________________________________________________________________

channic@uiucdcsm.cs.uiuc.edu (05/03/88)

In his article Brian Yamauchi (yamauchi@speech2.cs.cmu.edu) writes:
> /* ---------- "Re: AIList V6 #86 - Philosophy" ---------- */
> In article <368693.880430.MINSKY@AI.AI.MIT.EDU>, MINSKY@AI.AI.MIT.EDU (Marvin Minsky) writes:
> > Yamauchi, Cockton, and others on AILIST have been discussing freedom
> > of will as though no AI researchers have discussed it seriously.  May
> > I ask you to read pages 30.2, 30.6 and 30.7 of The Society of Mind.  I
> > claim to have a good explanation of the free-will phenomenon.
> 
> Actually, I have read The Society of Mind, where Minsky writes:
> 
> | Everything that happens in our universe is either completely determined
> | by what's already happened in the past or else depends, in part, on
> | random chance.  Everything, including that which happens in our brains,
> | depends on these and only on these :
> |
> | A set of fixed, deterministic laws.	A purely random set of accidents.
> |
> | There is no room on either side for any third alternative.
> 
I see plenty of room -- my own subjective experience.  I make mental
decisions which are not random and are not completely determined (although
certainly influenced) by past determinism.  Minsky wondered why his
explanation seems to have eluded philosophers of the past.  I am not
surprised because evidently he is just being swept away out of control
in an entirely random or totally determined universe.  The philosophers
of the past were probably like me -- intelligent, sentient beings with
free will.  Are these philosophers and I in the minority?
I think not, but I am surprised that such views constitute RESPECTED,
let alone, PUBLISHED material on the subject.  Certainly this objectivist
viewpoint helps the discipline of AI:  people (i.e., funding agencies) will
be more likely to believe that a machine can be intelligent if intelligence
can be reduced to a set of purely deterministic laws.  But this BEGS THE
QUESTION of intelligent machines in the worst way.  Show me the deterministic
laws that create mind, Dr. Minsky, then I will believe there is no free will.
Otherwise, you are trying to refute an undeniable human experience.
Do you believe your career was merely the result of some bizarre genetic
combination or pure chance?

The attack is over.  The following is a plea to all AI researchers.  Please
do not try to persuade anyone, especially impressionable students, that s/he
does not have free will.  Everyone has the ability to choose to bring peace
to his or her own life and to the rest of society, and has the ability to
MAKE A DIFFERENCE in the world.  Free will should not be compromised for the
mere prospect of creating an intelligent machine.

Tom Channic
University of Illinois
channic@uiucdcs.cs.uiuc.edu
{decvax|ihnp4}!pur-ee!uiucdcs!channic

shani@TAURUS.BITNET (05/04/88)

In article <1579@pt.cs.cmu.edu>, yamauchi@speech2.cs.cmu.edu.BITNET writes:
> Actually, I have read The Society of Mind, where Minsky writes:
>[A quote of Minsky]
> I would agree with this.  In fact, unless one believes in some form of
> supernatural forces, this seems like the only rational alternative.

   You are touching the very core of the problem. The point at which this
'only randomness and determinism exist' view runs into problems is the
question of responsibility, i.e., if everything is pre-determined or random,
how can you assume responsibility for what you are doing? And if responsibility
does not exist, the whole matter of free will and value systems has no content.
So, if free will and a value system can be given to a machine, it is meaningless,
and if it has meaning, it depends on a third, irrational factor (free will),
which cannot (meanwhile?) be given to a machine...

O.S.

bettingr@sunybcs.uucp (Keith E. Bettinger) (05/06/88)

In article <3200016@uiucdcsm> channic@uiucdcsm.cs.uiuc.edu writes:
}In his article Brian Yamauchi (yamauchi@speech2.cs.cmu.edu) writes:
}} /* ---------- "Re: AIList V6 #86 - Philosophy" ---------- */
}} In article <368693.880430.MINSKY@AI.AI.MIT.EDU>, MINSKY@AI.AI.MIT.EDU (Marvin Minsky) writes:
}} } Yamauchi, Cockton, and others on AILIST have been discussing freedom
}} } of will as though no AI researchers have discussed it seriously.  May
}} } I ask you to read pages 30.2, 30.6 and 30.7 of The Society of Mind.  I
}} } claim to have a good explanation of the free-will phenomenon.
}} 
}} Actually, I have read The Society of Mind, where Minsky writes:
}} 
}} | Everything that happens in our universe is either completely determined
}} | by what's already happened in the past or else depends, in part, on
}} | random chance.  Everything, including that which happens in our brains,
}} | depends on these and only on these :
}} |
}} | A set of fixed, deterministic laws.   A purely random set of accidents.
}} |
}} | There is no room on either side for any third alternative.
}} 
}I see plenty of room -- my own subjective experience.  I make mental
}decisions which are not random and are not completely determined (although
}certainly influenced) by past determinism.

How do you know that?  Do you think that your mind is powerful enough to
comprehend the immense combination of effects of determinism and chance?
No one's is.

} [...] But this BEGS THE
}QUESTION of intelligent machines in the worst way.  Show me the deterministic
}laws that create mind, Dr. Minsky, then I will believe there is no free will.
}Otherwise, you are trying to refute an undeniable human experience.

No one denies that we humans experience free will.  But that experience says
nothing about its nature; at least, nothing ruling out determinism and chance.

}Do you believe your career was merely the result of some bizarre genetic
}combination or pure chance?
             ^^
The answer can be "yes" here, if the conjunction is changed to "and".

}
}The attack is over.  The following is a plea to all AI researchers.  Please
}do not try to persuade anyone, especially impressionable students, that s/he
}does not have free will.  Everyone has the ability to choose to bring peace
}to his or her own life and to the rest of society, and has the ability to
}MAKE A DIFFERENCE in the world.  Free will should not be compromised for the
}mere prospect of creating an intelligent machine.

Believe it or not, Minsky makes a similar plea in his discussion of free will
in _The Society of Mind_.  He says that we may not be able to figure out where
free will comes from, but it is so deeply ingrained in us that we cannot deny
it or ignore it.

}
}Tom Channic
}University of Illinois
}channic@uiucdcs.cs.uiuc.edu
}{decvax|ihnp4}!pur-ee!uiucdcs!channic

-------------------------------------------------------------------------
Keith E. Bettinger                  "Perhaps this final act was meant
SUNY at Buffalo Computer Science     To clinch a lifetime's argument
                                     That nothing comes from violence
CSNET:    bettingr@Buffalo.CSNET     And nothing ever could..."
BITNET:   bettingr@sunybcs.BITNET               - Sting, "Fragile"
INTERNET: bettingr@cs.buffalo.edu
UUCP:     ..{bbncca,decvax,dual,rocksvax,watmath,sbcs}!sunybcs!bettingr
-------------------------------------------------------------------------

kqb@ho4cad.ATT.COM (05/10/88)

In article <721@taurus.BITNET>, shani@TAURUS.BITNET writes:
> ... if everything is pre-determined or random,
> how can you assume responsibility for what you are doing? And if responsibility
> does not exist, the whole matter of free will and value systems has no content,
> ...

A few people have expressed difficulty with the notion of responsibility when
they discover that a person's actions are a combination of (1) deterministic
processes and (2) random processes.   I think that the difficulty arises from
a confused notion of responsibility (and "free will").  In this message I aim
to remove (some of) the confusion from the concept of responsibility.

Responsibility can be claimed ("I stand for X and will be responsible for X.")
or assigned ("You are Joey's parent and are responsible for Joey's actions.").
  Responsibility does not exist independent of a claim or assignment of
  responsibility; it is a personal or social contract rather than an
  intrinsic property of a person.
We may question whether or not an assignment or claim of responsibility is
appropriate (useful), but it is a waste of time to ask whether or not a person
IS responsible (due to "free will", etc.).

A claim or assignment of responsibility is most useful when the responsible
person has the power to uphold that responsibility.  It is not useful to
assign responsibility for the war between Iran and Iraq to a newborn baby in
Cincinnati because the baby has no power over that situation.  On the other
hand, it may be useful for an adult to claim responsibility for world hunger
(even though he or she did not personally cause it), because there is a lot
that a resourceful and committed adult can do about that.

Here is why the notion of responsibility_as_a_contract works:
Assigning responsibility to the appropriate person is useful because it changes
the environment of the person who is assigned that responsibility.  Since people are
capable of understanding the consequences of their actions, an assignment of
responsibility can be a useful way to modify their behavior.
Claiming responsibility can be a useful way to clarify, prioritize, and
communicate your values.  (If you are ever concerned about whether or not your
life means anything, try taking a stand on something that you care about.)
                                                  - Kevin Q. Brown
                                                  ...ihnp4!ho4cad!kqb
< Standard disclaimer >

dharvey@wsccs.UUCP (05/12/88)

In article <3200016@uiucdcsm>, channic@uiucdcsm.cs.uiuc.edu writes:
> 
> In his article Brian Yamauchi (yamauchi@speech2.cs.cmu.edu) writes:
> Do you believe your career was merely the result of some bizarre genetic
> combination or pure chance?
> 
People like you need to watch "Being There" at least 10 times.  The fact
that I was born to a lower-class family shouldn't have any effect on my
career choice vs. the ones made by young Ron Reagan, should it?  And I
can imagine that the poor starving Ethiopians have just as much of a chance
of becoming Computer Scientists as I do.  Chance has much more of an
impact than many want to admit in determining what we do.  I can also
imagine the great fame and glory that I will achieve for a great
scientific discovery since it will happen just because I will it!  Never
mind the fact that my IQ is not even close to Albert Einstein's!  Also,
genetic structure has a very significant impact on how we live our
lives.  Even a casual perusal of the studies of identical twins
separated at birth will produce an uncanny amount of similarities, and
this also includes IQ levels, even when the social environments are
radically different.  You dismiss these factors as if they are
insignificant and trivial. 
>
> The attack is over.  The following is a plea to all AI researchers.  Please
> do not try to persuade anyone, especially impressionable students, that s/he
> does not have free will.  Everyone has the ability to choose to bring peace
> to his or her own life and to the rest of society, and has the ability to
> MAKE A DIFFERENCE in the world.  Free will should not be compromised for the
> mere prospect of creating an intelligent machine.
> 
I am a student (perhaps more depressable than impressable) and haven't
noticed anyone persuading me in any way.  A lot have tried to convince
me that I have free will, but for some reason I always get lost in the
quagmire of linguistic semantics which makes the term almost impossible
to define clearly.  You must understand that I have read much of the
works of modern philosophers (Descartes, Spinoza, Leibniz, Berkeley,
Hume, and Kant among them) and the whole issue remains unresolved for
me.  I tend to lean toward the AI perspective, but....

	The only thing you can know for sure is

	   That you can't know anything for sure!  (-:

					dharvey @ wsccs

	Nobody represents me, and I represent Nobody.

sher@sunybcs.UUCP (05/13/88)

It seems that free will vs determinism is being nicely beaten to
death.  To abstract out the part of the discussion that I personally
find interesting: those who object to determinism and those who object
to any deterministic form of intelligence seem largely to object to
the fact that deterministic entities can not carry "responsibility".
Having intelligent yet irresponsible entities is indeed a scary
thought.  But I still am hazy about what "responsibility" might be.

Well, first the dictionary ("The Oxford Minidictionary"):
    responsible:
1.	obliged to take care of something or to carry out a duty;
2.	liable to be blamed for loss or failure, etc.;
3.	having to account for one's actions;
4.	capable of rational conduct;
5.	trustworthy;
6.	involving important duties;
7.	being the cause of something.
An unusually long definition for a pocket dictionary, probably
indicating an unusually difficult word.

So now we can consider which of these definitions can be applied to a
machine, intelligent or not.
Definitions 1, 4, 6, and 7 can be applied to any machine.  Definition 3 can be
applied to software with a debug mode so that actions can be explained
as a result of input and state.  Thus 3 can be applied to axiomatic
systems but perhaps not so much to connectionist systems.  However 3
is applied to human beings (the ultimate connectionist system) so
perhaps this is not a valid objection.  I have encountered machines I
trusted and machines I wouldn't trust with a bit so I guess 5 can be
applied to machines.  That leaves definition 2.  
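
As a purely illustrative aside on definition 3 (a toy sketch of my own, not
code from any real system), a program whose "debug mode" can account for its
actions as a result of input and state might look something like this:

# A toy illustration of definition 3: a rule-based decider with a
# "debug mode" that can account for each action in terms of its
# input and its internal state.  All names here are made up.

class AccountableDecider:
    def __init__(self, threshold=10):
        self.threshold = threshold      # internal state
        self.trace = []                 # record of how each decision arose

    def decide(self, reading):
        action = "alarm" if reading > self.threshold else "ignore"
        self.trace.append("input=%r, threshold=%r -> action=%r"
                          % (reading, self.threshold, action))
        return action

    def account(self):
        """Explain past actions as a result of input and state."""
        return "\n".join(self.trace)

d = AccountableDecider()
d.decide(7)
d.decide(12)
print(d.account())

Whether such a trace amounts to "having to account for one's actions" in the
dictionary's sense is, of course, exactly the sort of question under discussion.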

Can a machine be "liable to be blamed for loss or failure etc?"

Currently, when a machine fails, blame accrues either to the operator
of the machine if it was improperly operated, to the manufacturer if
it failed due to an internal flaw, or to G-d if it failed in a
completely unaccountable way.  We never really blame or punish a car
for hitting someone.  If it hit someone because the driver was drunk
we blame the driver.  If it hit someone because of a failure in the
brake system we blame the manufacturer or G-d depending on the nature
of the failure.  

Would intelligent machines be different in this respect?
If so, how?

I am personally interested in this problem since I currently have a
small project (that I hope will grow larger) in medical imaging.  If
this project bears fruit I may be directly confronted with the issue
of responsibility for intelligent machinery.  

One last note: we seem sometimes to apply responsibility to things in
inverse proportion to our understanding of them.  Thus a more complex
and difficult to understand system will tend to carry more
responsibility than a simpler system.  Thus we will place more
responsibility in our cars than in our screwdrivers.  We also place
more responsibility in the economy (the ultimately complex system)
than in our cars.  This may be why people want to assign
responsibility to intelligent(=complex) machinery but not to simpler
machinery.  Is this reasonable behavior?

-David Sher
ARPA: sher@cs.buffalo.edu	BITNET: sher@sunybcs
UUCP: {rutgers,ames,boulder,decvax}!sunybcs!sher

ok@quintus.UUCP (Richard A. O'Keefe) (05/16/88)

In article <523@wsccs.UUCP>, dharvey@wsccs.UUCP (David Harvey) writes:
> lives.  Even a casual perusal of the studies of identical twins
> separated at birth will produce an uncanny amount of similarities, and
> this also includes IQ levels, even when the social environments are
> radically different.

ONLY a casual perusal of the studies of separated twins will have this
effect.  There is a selection effect:  only those twins are studied who
are sufficiently far from separation to be located!  A lot of these
so-called "separated" twins have lived in the same towns, gone to the
same schools, ...

bwk@mitre-bedford.ARPA (Barry W. Kort) (05/17/88)

I support Kevin Brown's suggestion that an agent is free to accept
or claim responsibility for the well-being of a corner of the planet.

In return for this forfeiture of innocence, the agent receives
power, authority, prestige, or a life of meaning.  I see nothing
wrong with that.  We are free to construct such bargains.

--Barry Kort

bwk@mitre-bedford.ARPA (Barry W. Kort) (05/17/88)

David Sher has injected some new grist into the discussion of
"responsibility" for machines and intelligent systems.

I tend to delegate responsibility to machines known as "feedback
control systems".  I entrust them to maintain the temperature of
my house, oven, and hot water.  I entrust them to maintain my
highway speed (cruise control).  When these systems malfunction,
things can go awry in a big way.  I think we would have no trouble
saying that such feedback control systems "fail", and their failure
is the cause of undesirable consequences.
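
For concreteness, here is a minimal sketch of the kind of on/off feedback
loop such a thermostat runs (a toy model of my own, with made-up names, not
any particular product's code):

# A minimal on/off (bang-bang) thermostat feedback loop, with a toy
# "room" simulation attached so the sketch is self-contained.

SETPOINT = 20.0     # desired temperature, degrees C
DEADBAND = 0.5      # hysteresis, to avoid rapid on/off cycling

def thermostat_step(temperature, heater_on):
    """Decide the heater's next state from the measured temperature."""
    if temperature < SETPOINT - DEADBAND:
        return True                  # too cold: heat
    if temperature > SETPOINT + DEADBAND:
        return False                 # warm enough: stop heating
    return heater_on                 # inside the deadband: no change

def simulate(steps=60):
    temperature, heater_on = 15.0, False
    for _ in range(steps):
        heater_on = thermostat_step(temperature, heater_on)
        # toy room physics: the heater adds heat, the room leaks it
        temperature += (0.4 if heater_on else 0.0) - 0.1
    return temperature

print("final temperature: %.1f C" % simulate())

A failure anywhere in that loop -- a stuck sensor, a bad comparison, a dead
actuator -- is exactly the kind of "failure" with undesirable consequences
described above.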

The only interesting issue is our reaction.  I say fix them (or
improve their reliability) and get on with it.  Blame and punishment
are pointless.  If a system is unable to respond, doesn't it make
more sense to restore its ability than to merely label it "irresponsible"?

--Barry Kort

harwood@cvl.umd.edu (David Harwood) (05/18/88)

In article <981@cresswell.quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
>In article <523@wsccs.UUCP>, dharvey@wsccs.UUCP (David Harvey) writes:
>> lives.  Even a casual perusal of the studies of identical twins
>> separated at birth will produce an uncanny amount of similarities, and
>> this also includes IQ levels, even when the social environments are
>> radically different.
>
>ONLY a casual perusal of the studies of separated twins will have this
>effect.  There is a selection effect:  only those twins are studied who
>are sufficiently far from separation to be located!  A lot of these
>so-called "separated" twins have lived in the same towns, gone to the
>same schools, ...

	Please, Mr. O'Keefe - please stick to programming languages
where we very much appreciate your competence. But you really don't
know what you are talking about concerning methodologies or results of
twin studies. 
	This is not to say that there aren't methodological
problems in any science - but simply that you are pretty obviously
ignorant in this matter, yet have a penchant for wise-guy replies.
If you have published criticisms, or even read the ongoing major
U. Minnesota studies, then I will gladly publish an apology. (I surely
have not.) 
	Can you substantially prove that there is not sound research
which shows comparatively significant psychological similarity of
identical twins, even when growing up apart? As for your remark about
circumstantial selection effects - give us computer researchers
a break - we don't think that all psychologists are more methodologically
incompetent than AI- or Prolog specialists. That is absurd. Moreover,
it is a naive fallacy for you to assume that genetic expression is not
dependent on environment. That is, the fact that separated identical
twins might be selected because they are found in the same physical or
cultural "niches" has to be allowed for theoretically, alright, but
the issue is whether twins are comparatively more similar in various
more or less different environments. And apparently they are, the
extent depending on the environment.
	By the way, I'm impressed with your Quintus Prolog products,
but do they actually pay you to post incessantly, as some kind of
advertisement or, more likely, public service? Impressive as you
are technically, I am rather sick of this self-aggrandizement. (Of
course, others have vociferously expressed their displeasure with my 
occasional public annoyance.)
	Now I will go to read the rest of the news fit to post ;-)

David Harwood

ok@quintus.UUCP (Richard A. O'Keefe) (05/19/88)

In article <2865@cvl.umd.edu>, harwood@cvl.umd.edu (David Harwood) writes:
> In article <981@cresswell.quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
> >In article <523@wsccs.UUCP>, dharvey@wsccs.UUCP (David Harvey) writes:
> >> lives.  Even a casual perusal of the studies of identical twins
> >ONLY a casual perusal of the studies of separated twins will have this
> 
> 	Please, Mr. O'Keefe - please stick to programming languages
> where we very much appreciate your competence. But you really don't
> know what you are talking about concerning methodologies or results of
> twin studies. 

I am trained in Statistics.  I have not worked in psychology myself, but
I have spent many hours in conversation with statisticians who have, and
I have advised at a computing centre where psychologists brought their
work to be blessed by the computer.  There is no way I can pretend to be
an expert in psychology or twin studies, but I am not wholly ignorant of
the literature.

> Moreover,
> it is a naive fallacy for you to assume that genetic expression is not
> dependent on environment. That is, the fact that separated identical
> twins might be selected because they are found in the same physical or
> cultural "niches" has to be allowed for theoretically, alright, but
> the issue is whether twins are comparatively more similar in various
> more or less different environments.

Please don't put words in my mouth.  Of course genetic expression is
dependent on environment.  That was my point!  My point is that because
_both_ genotype and environment are strongly influential, and because
so-called separated twins usually have _very_ similar environments, the
results from "separated twin" studies are anything but uncontentious.
I do not mean to claim that such studies are BAD; my point is that a
non-superficial reading will show that the results are not clear-cut.
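
To make the selection-effect worry concrete, here is a back-of-the-envelope
simulation (entirely a toy model of my own, not taken from any actual twin
study): when the "separated" twins we manage to locate tend to share similar
environments, the observed twin correlation overstates the purely genetic
contribution.

# Toy model: trait = shared genes + individual environment + noise.
# Compare twin correlations when environments are independent versus
# when the located "separated" twins in fact share similar environments.

import random

def make_twin_pair(env_similarity):
    genes = random.gauss(0, 1)                     # shared by both twins
    env_a = random.gauss(0, 1)
    env_b = env_similarity * env_a + (1 - env_similarity) * random.gauss(0, 1)
    trait_a = genes + env_a + random.gauss(0, 0.2)
    trait_b = genes + env_b + random.gauss(0, 0.2)
    return trait_a, trait_b

def correlation(pairs):
    n = len(pairs)
    xs, ys = [p[0] for p in pairs], [p[1] for p in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

random.seed(0)
for sim in (0.0, 0.8):   # truly independent vs. largely shared environments
    pairs = [make_twin_pair(sim) for _ in range(5000)]
    print("environment similarity %.1f -> twin correlation %.2f"
          % (sim, correlation(pairs)))

In this toy model the genetic contribution is identical in both runs; only
the degree of shared environment changes, yet the measured similarity rises
sharply with it.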