[net.religion] Torek plays Humpty Dumpty with free will

rlr@pyuxd.UUCP (Pesmard Flurrmn) (01/28/85)

"When I say a word..."

> From: rlr@pyuxd.UUCP (Rich Rosen)
> > Sorry, Paul, as I've discussed with you before, rational evaluation is not
> > equivalent to free will.  Unless, of course, you shirk the meaning of the 
> > term and simply label rational evaluation as "free will" because you feel 
> > like it.  

> Sorry, Rich, an analysis of the meaning of "free will" shows that it IS 
> equivalent to the capacity for rational evaluation and action.  That this
> is not immediately obvious does not make it false.  Agency (having free
> will) consists in being able to choose among alternatives -- which raises
> the question how one chooses, and the answer is by evaluating alternatives.
> This in turn involves the use of reason, of having a conception of a norm
> and being disposed to adopt a consistent, best justified set of norms.
> Furthermore, this analysis can be strengthened by an analogy to freedom in
> thought (especially inference), which also amounts to rational evaluation
> (in this case, evaluation of deductive arguments).

Again, you assume that because you use the rational path (rather than, say,
the biochemically instinctive path), employing what you call rational
evaluation, you are engaging in acts of "free will".  But you are no freer
to "choose" a rational path than you are to "choose" a biochemically
"instinctive" path:  whatever path of "reasoning" (or non-reasoning) is
taken is based on your chemical makeup.  You make a distinction between them
(and there IS a difference in the methods AND [sometimes] the results), but
they are functionally equivalent.  They have the same causes.  One is simply
better than the other (but not always so):  they are BOTH chemical methods
that produce (hopefully) optimum survival results.  You choose to draw
a black-and-white distinction where there is really a continuous spectrum.  Some
organisms have minimal (even biochemical) means of making decisions.  Some have
more.  Supposedly, we have the most advanced decision-making mechanism.  But
that mechanism doesn't become "free will" just because you say it is.

> Apparently you and Sargent are suffering from attachment to a paradigm (in
> the ordinary sense, not T.S. Kuhn's) of free will as some mysterious "ghost
> in the machine" with the ability to make decisions ex nihilo.  Your fallacy
> lies in not seeing that something can be very different from a paradigm
> case and still be an instance of the general concept.  

But the very notion of free will implies freedom to choose a decision path
regardless of one's surroundings, one's chemical make-up, etc.  That, of
necessity, REQUIRES the "ghost in the machine", the external agent.  It is
not a part of the definition; it is a consequence of it.  (Jeff may be
suffering, but I'm not, thank you. :-)

>>Thus, whether or not the evidence supports it, thus, whether or not the 
>>sole basis for believing in it (NOT rational evaluative capabilities, BUT
>>real live free will [mistake #1]) is wishful [mistake #2] thinking that 
>>the cause of one's thoughts is anything OTHER THAN [mistake #1 again] the 
>>result of chemical processes, since believing in it is subjectively [mis-
>>take #2 again] "better", believe in it!  [EMPHASIS added.]

> Mistake #1 -- see above.  

I guess my "mistake" here is that I don't equate rational evaluative
capabilities with free will.  Your argument has still given me no reason
to do so.  Are you simply using a word or term the way you like because it
suits you?  If your definition of free will is different from mine, i.e.,
the same as "rational evaluative capabilities", fine.  But the commonly
understood usage of the term implies the circumstances and consequences I
described above.  And, unless you're Humpty Dumpty, you can't just change
around the meaning of words at whim.

> Mistake #2 -- believing in free will is 
> OBjectively better because it carries no penalty (of avoidable error)
> if mistaken but does carry a benefit (avoiding avoidable error) if correct.
> It is thus (to use decision theory terminology) a dominant strategy, which
> is about the highest recommendation reason can give.

Didn't someone else already explain this as analogous to Pascal's reasoning for
believing in god?  And didn't that person explain that, objectively and
rationally, one doesn't choose beliefs based on their utility, but rather on
their correctness?  Of course, we've seen that that does not apply to everyone.
(And I'm not talking about Mr. Torek...)
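
For those who don't know the jargon Paul is borrowing:  a strategy "dominates"
another when it does at least as well in every possible state of the world and
strictly better in at least one.  Here is a toy rendering of the comparison
(the payoff numbers are made-up placeholders; neither Paul nor I supplied any):

# Toy payoff table for the "dominant strategy" claim.  Rows are what we
# choose to believe; columns are how the world actually is.  The numbers
# are illustrative assumptions, nothing more.
payoffs = {
    "believe in free will": {"free will exists": 1, "no free will": 0},
    "disbelieve":           {"free will exists": 0, "no free will": 0},
}

def weakly_dominates(a, b):
    # a dominates b if it does at least as well in every state of the
    # world and strictly better in at least one
    states = payoffs[a].keys()
    return (all(payoffs[a][s] >= payoffs[b][s] for s in states) and
            any(payoffs[a][s] >  payoffs[b][s] for s in states))

print(weakly_dominates("believe in free will", "disbelieve"))   # prints: True

Which is exactly the structure of Pascal's wager, and exactly why the
objection applies:  the table only tells you which belief is more USEFUL
given the assumed payoffs; it says nothing about which column of the table
we actually live in.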

>> ... look up the word "free" 
>> and explain how, when flowing rivers, animal behaviors and human 
>> actions all result from the same process (why think otherwise?), human 
>> actions are somehow "different"?

> The process is the same only at a very general level of description.  The
> difference is rationality (which animals may have to some degree).

Rationality is a human-made description of a process.  One can just as easily
say that rivers and rocks also behave rationally, though their mechanism for
"decision-making" is less elaborate than our own.  To arbitrarily say,
"This is OUR method; we call it free will (you call it corn :-)." is
erroneous.  It seems ironic, Paul, that you, who castigated me for looking "too
deep to the root cause", now claim that it's invalid to say that things are the
same "at a very general level of description".

>>The basis of the "rational evaluation" is the process that causes it, and
>>whether or not the end result IS strictly rational, [--]

> -- is precisely the issue, so you can stop right there.

Only in the eye of the classifying observer (you).  Continued ...

>>Our observations do not make the universe what it is.  ... the universe is 
>>not the same as our observations of it ...

> Yeah, yeah.  All of which ignores my point that the "outside world" is
> interesting, if at all, only insofar as it relates to us and we can relate
> to it.  Thus the sinlessness of (some variants of) "anthropocentrism".

Sorry, Paul.  Interestingness or noninterestingness has no bearing on
reality, except in the minds of certain observers.  Your anthropocentrism
(and that of others) is hardly "sinless".  I didn't ignore your point; I
disagreed with it.

>>I suggest you read Carroll's "What the Tortoise Said to
>>Achilles"....  Though I agree that reason is simply the means by which one 
>>evaluates the validity of conclusions based on the premises ... the nature
>>of reason as the justification for reason is indeed circular.  Carroll
>>describes a set of premises, and claims that one can accept all of the 
>>premises and yet still reject the conclusion that logically follows ...
>>(A "logical" person might not do any of this rejecting, but what's the 
>>"reason" for a non-logical person to do so?)

> The same logical reason we have.

This is akin to being asked "What reason do we have to believe in god?" and
answering "The same god-given reasons we have".  You cannot justify logic with
logic.  Logic is simply a means of reaching a conclusion based on analysis of
premises.  You cannot prove logic.  But you can show that certain sets of
premises, as accepted by certain groups of people, are either clearly
erroneous (unjustified wishful thinking) or contradictory.  You can only prove
to them that they are wrong if 1) they accept the foundations of logic,
and 2) they accept the possibility that their conclusions might be wrong
or based on faulty premises.

>  Carroll's "tactic" of adding additional
> premises (which state logical laws) shows *not* that reason justifying
> reason is circular, but that reason is not a premise but rather the way
> of getting from premises to conclusions.  (Carroll's mental exercise
> shows that the function of reason is *not* performed by treating it as a
> premise.)  Precisely because reason is not a premise, its use is *not*
> circular!

This assumes the validity of reason.  You seem unable to consider the
possibility of a system of belief outside of reason and logic.  If reason
is not a premise, then you're saying that it's a given, which means (in fact)
that it's a premise!  You are reading quite a lot into Carroll's intentions
there:  I would think (knowing what I've read of Carroll) that he was simply
trying to show a contradiction or paradox in the use of logic (something he
reveled in), not trying to show that the use of reason was not circular.  (If
anything, he was trying to show that it WAS, devil that he was...)
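
For anyone who hasn't read the dialogue:  Carroll's Tortoise grants the
premises but refuses to grant the conclusion that follows from them, so
Achilles keeps writing the inference itself down as one more premise.  A
mechanical toy rendering of the regress (mine, not Carroll's or Paul's):

# Start with the granted premises and the conclusion the Tortoise
# refuses to draw from them.
premises = ["A", "B"]
conclusion = "Z"

for step in range(3):
    # The Tortoise grants every premise listed so far but still will not
    # grant Z, so the inference itself gets added as yet another premise...
    new_premise = "(" + " and ".join(premises) + ") -> " + conclusion
    premises.append(new_premise)
    print(new_premise)

# ...and the list grows without end.  Writing "reason" down as one more
# premise never closes the gap between the premises and the conclusion.

Whether that regress shows (as Paul says) that reason is not a premise at
all, or (as I read Carroll) that appealing to reason to get from premises to
conclusions is itself something a non-logical person is free to refuse, is
precisely where we part company.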

>>The worth (or lack of worth) of scientific inquiry is not the issue; that
>>worthfulness/worthlessness would be determined by an application of the 
>>results obtained.  

> And how do you think scientists decide to do this or that research; do you
> think they just go about investigating any random fact (like what the 
> average number of hairs on species xyz is, to three decimal places)?  Do
> they not presuppose that their inquiry has at least a good chance of being
> worthwhile?  Would they do it otherwise?

Their motivations for researching particular things have little to do with
the validity of the results.  Doing something worthwhile (worthwhile by whose
perception?) based on erroneous findings still yields erroneous results.
 
>>| Can there be -- as I think Rosen wants to suggest -- an "absolute right/
>>| wrong" in science without implying a similar cognitivity for ethics?
> 						 ^^^^^^^^^^^
>>[Paul] clearly hasn't seen anything I've written on the subject of good 
>>versus evil, and the fallacy of absolute good/evil in a world where good
>>and evil are simply defined as what's beneficial/harmful to the person 
>>saying the word(s). ... "Scientific" right/wrong simply consists of that
>>which is true as opposed to that which isn't.  "Moral" right and wrong 
>>are clouded by the issue of who is determining the rightness and
>>wrongness and on what basis.

> What world is that?  That's not how I would define good/evil (how about
> as what's beneficial/harmful to *anyone*?).  How are "moral" issues 
> "clouded" in a way that "scientific" (a completely separate, non-
> overlapping realm?) ones aren't?  Take a look at a lot of scientific
> controversies and tell me that there is no "issue of who is determining
> [correctness] and [incorrectness] and on what basis"!  "But," you're
> retorting, "there's a true/false (i.e., *cognitivity*) to science which
> *has no counterpart* in ethics."  Oh really?  Care to prove that there's
> such a *difference*?

I thought I had, if not proven it, shown facets of it.  What about rain in
a certain region (good for some farmers whose land has gone dry, bad for
others for whom it would cause the river to overflow)?  Sorry, but good and evil are always
in the eye of the beholder, and to think otherwise is nothing more than
wishful thinking.  However, there is a notion of a "common good", that which
is "good" to a whole community.  But, still, ALL these notions of good and
evil have no bearing on factual, objective rightness/wrongness of the way
the universe physically is.
-- 
"Right now it's only a notion, but I'm hoping to turn it into an idea, and if
 I get enough money I can make it into a concept."       Rich Rosen pyuxd!rlr