[net.religion] Responses to D. Trissel and R. Herman

esk@wucs.UUCP (Paul V. Torek) (02/11/85)

[Free choice, reason, evaluation, and AI]

From: davet@oakhill.UUCP (Dave Trissel)
> > Agency (having free will) consists in being able to choose among
> > alternatives -- which raises the question of how one chooses, and
> > the answer is by evaluating alternatives.  This in turn involves
> > the use of reason: of having a conception of a norm and being
> > disposed to adopt a consistent, best justified set of norms.	[P. Torek]
> ...
> In essence, the evaluate module [of Dave's chess program], in
> conjunction with the search tree algorithms, has "a conception of a
> norm and [is] disposed to adopt a consistent, best justified set of
> norms"; that is how the best move is made.  Of course the program can
> only in primitive ways (at the current time) "adopt" or change its
> "norms."  But such capability is of prime interest to the computer
> chess/AI community. [D. Trissel]

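To make the "evaluate module plus search tree" picture concrete, here
is a minimal sketch in Python.  The toy game tree, the numeric scores,
and the function names are all invented for illustration; a real chess
program generates positions and scores them with many weighted terms
rather than raw numbers.

def evaluate(position):
    # The "norms": a static judgment of how good a position looks.
    # In this toy, each leaf already carries its score as a number.
    return position

def minimax(node, maximizing):
    # Leaves get judged by the norms; interior nodes back those
    # judgments up the search tree, assuming each side picks
    # whatever is best for it.
    if not isinstance(node, list):
        return evaluate(node)
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Three candidate moves, each with two possible opponent replies:
tree = [[3, 5], [6, 2], [1, 9]]
best = max(range(len(tree)), key=lambda i: minimax(tree[i], False))
print("choose move", best)  # move 0: its worst reply (3) beats the others'

On this picture, "adopting" or "changing" norms just means rewriting
evaluate() -- which is the primitive sense in which such a program can
be said to hold norms at all.
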
In order for an AI program to "be disposed to adopt a consistent, best
justified set of norms" it would have to be much less primitive, and be
able to evaluate a much wider range of things.  It would have to be able
to evaluate not just "is this a good chess strategy?" but also "why
should I sit here playing chess all the time?"  _Elbow Room_ by Dennett
has a pretty decent explanation of the kind of decision-making and
self-evaluation capabilities needed for agency ("free will").  Nothing
rules out AI developing to that point in the (probably distant) future,
however.

There is one remaining curiosity:  what good would such evaluative 
capabilities be, if the program had no basis for criticizing its
activities?  What if the program just can't answer "should I sit here
playing chess"?  Knowledge of good and bad is required, and that (I
would contend) requires experiences.  So an Artificial Intelligence
needs experiences before it can have agency-with-a-point.  How could
it have them?  Therein lie some complex philosophical issues.

From: rwh@aesat.UUCP (Russ Herman)
> > One chooses...by evaluating alternatives.	[Paul Torek]
> But discovering the alternatives can be a problem of perception, not of
> reason.  E.g., if the room were on fire, I couldn't find a way out
> without my glasses!  And I'm far from convinced that core values can be
> demonstrated to be of rational origin.	[Russ Herman]

But reason *includes* learning from experience (i.e., from perception) --
at least on my understanding of "reason".  And even though you may have
trouble discovering alternatives, you can still evaluate the ones you've
discovered.  (E.g., "I can stay here and burn, or fumble around for the
door or for my glasses; the door's easier to find ... [fumble, fumble]".)
And if reason includes learning from experience, core values are -- or
rather *can be* -- of rational origin.
				--The aspiring iconoclast,
				Paul V. Torek, ihnp4!wucs!wucec1!pvt1047
Don't hit that 'r' key!  Send any mail to this address, not the sender's.