[comp.ai.digest] AIList V6 #86 - Philosophy

MINSKY@AI.AI.MIT.EDU (Marvin Minsky) (05/01/88)

Yamauchi, Cockton, and others on AILIST have been discussing freedom
of will as though no AI researchers have discussed it seriously.  May
I ask you to read pages 30.2, 30.6 and 30.7 of The Society of Mind.  I
claim to have a good explanation of the free-will phenomenon.  I agree
with Gilbert Cockton that it is not the lack of answers that should be
criticised, but the contemporary ignorance of the subject.  (As for
why my own answer evaded philosophers for millennia, my hypothesis is
that philosophers have not been very insightful about actual
psychological phenomena - which is why it had to wait for Freud - or,
perhaps, Poincare - to produce convincing discussions about the
importance of unconscious thinking.)

Cockton also sagely points out that
 a rule-based or other mechanical account of cognition and decision
 making is at odds with the doctrine of free will which underpins most
 Western morality. ...  Scientists who seek moral, ethical,
 epistemological or methodological vacuums are only marginalising
 themselves into positions where social forces will rightly constrain
 their work.

I only disagree with Cockton's insertion of "rightly".  Like
E. O. Wilson, I prefer to follow ideas even where they lead to
potentially unpopular conclusions.  Indeed, I feel it is only proper
for those social forces to try to constrain my work.  But when
researchers feel constrained to censor their own work, everyone may
end up the poorer.

I'm not even sure this is a disagreement.  A closer look might show that
this is what Cockton is actually saying, too.

NICK@AI.AI.MIT.EDU (Nick Papadakis) (05/27/88)

Date: Mon, 16 May 88 14:26 EDT
From: Barry W. Kort <bwk@mitre-bedford.arpa>
Organization: Moribund Corporation, Seventh Chapter, DE
Subject: Re: AIList V6 #86 - Philosophy
References: <3200016@uiucdcsm>, <523@wsccs.UUCP>, <11191@sunybcs.UUCP>
Sender: ailist-request@ai.ai.mit.edu
To: ailist@ai.ai.mit.edu

David Sher has brought some new grist to the discussion of
"responsibility" for machines and intelligent systems.

I tend to delegate responsibility to machines known as "feedback
control systems".  I entrust them to maintain the temperature of
my house, oven, and hot water.  I entrust them to maintain my
highway speed (cruise control).  When these systems malfunction,
things can go awry in a big way.  I think we would have no trouble
saying that such feedback control systems "fail", and their failure
is the cause of undesirable consequences.
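
For concreteness, the whole delegated "responsibility" of such a
device fits in a few lines.  Here is a minimal sketch of a bang-bang
thermostat loop - hypothetical Python, with every number invented -
in which a drifted sensor makes the system "fail" in exactly this
mechanical sense:

    def thermostat_step(sensed, setpoint, heater_on, band=1.0):
        # Bang-bang rule: switch on below the band, off above it,
        # and hold the current state inside it.
        if sensed < setpoint - band:
            return True
        if sensed > setpoint + band:
            return False
        return heater_on

    def simulate(hours=24, setpoint=20.0, sensor_bias=0.0):
        temp, heater_on = 15.0, False
        for _ in range(hours):
            heater_on = thermostat_step(temp + sensor_bias, setpoint, heater_on)
            temp += 0.8 if heater_on else -0.5   # crude house physics
        return temp

    print(simulate())                  # healthy sensor: stays near the setpoint
    print(simulate(sensor_bias=-10))   # drifted sensor: house overheats

Nothing in the loop knows it has gone wrong; the "failure" is just
the same rule applied to a bad reading.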

The only interesting issue is our reaction.  I say fix them (or
improve their reliability) and get on with it.  Blame and punishment
are pointless.  If a system is unable to respond, doesn't it make
more sense to restore its ability than to merely label it "irresponsible"?

--Barry Kort

NICK@AI.AI.MIT.EDU (Nick Papadakis) (06/02/88)

Date: Sun, 15 May 88 23:59 EDT
From: Richard A. O'Keefe <quintus!ok@sun.com>
Organization: Quintus Computer Systems, Mountain View, CA
Subject: Re: AIList V6 #86 - Philosophy
References: <1579@pt.cs.cmu.edu>, <3200016@uiucdcsm>, <523@wsccs.UUCP>
Sender: ailist-request@ai.ai.mit.edu
To: ailist@ai.ai.mit.edu

In article <523@wsccs.UUCP>, dharvey@wsccs.UUCP (David Harvey) writes:
> lives.  Even a casual perusal of the studies of identical twins
> separated at birth will produce an uncanny amount of similarities, and
> this also includes IQ levels, even when the social environments are
> radically different.

ONLY a casual perusal of the studies of separated twins will have this
effect.  There is a selection effect: the only twins who can be studied
are those whose separation was incomplete enough for both to be
located!  A lot of these so-called "separated" twins have lived in the
same towns, gone to the same schools, ...
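
The point is easy to check numerically.  In the following toy
simulation (hypothetical Python; every number is invented), each pair
of twins has some true degree of separation, but only the
least-separated pairs can be located and studied - and the studied
sample duly looks "uncanny":

    import random
    from statistics import mean

    random.seed(1)

    def similarity(separation):
        # Invented model: the less separated the rearing environments,
        # the more similar the pair turns out, plus a little noise.
        return max(0.0, 1.0 - 0.1 * separation + random.gauss(0, 0.05))

    separations = [random.uniform(0, 10) for _ in range(1000)]
    studied = [s for s in separations if s < 3]   # only these pairs are locatable

    print("all separated pairs: %.2f" % mean(similarity(s) for s in separations))
    print("studied pairs only:  %.2f" % mean(similarity(s) for s in studied))

On this model the studied pairs average far higher similarity than the
full population, though nothing about twins changed - only who could
be found.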