[net.religion] Torek's wager and its rationality

esk@wucs.UUCP (Paul V. Torek) (02/11/85)

[Rosen and his merry band of naive realists -- en garde!]

From: rlr@pyuxd
> > believing in free will is OBjectively better because it carries no 
> > penalty (of avoidable error) if mistaken but does carry a benefit 
> > (avoiding avoidable error) if correct.  [TOREK]
> Didn't someone else already explain this as analogous to Pascal's reasoning
> for believing in god?  And didn't that person explain that, objectively and
> rationally, one doesn't choose beliefs based on their utility, but rather
> on their correctness?  [RICH ROSEN]

And didn't I explain that that's a false dichotomy?  (How can a belief be 
"correct" if we ought to believe the opposite???)

> > > Our observations do not make the universe what it is.  ... the universe
> > > is not the same as our observations of it ... [ROSEN]
> > Yeah, yeah.  All of which ignores my point that the "outside world" is
> > interesting, if at all, only insofar as it relates to us and we can relate
> > to it.  Thus the sinlessness of (some variants of) "anthropocentrism".
> > [TOREK]
> Sorry, Paul.  Interestingness or noninterestingness have no bearing on
> reality. ... Your anthropocentrism (and that of others) is hardly 
> "sinless".  [ROSEN]

Rich, you naive realist:  If there are realities which are unknowable and
hence uninteresting, then so much the worse for *them*.  Where is my "sin"
(my mistake)? 

From: barry@ames.UUCP (Kenn Barry)
> ...if I "believe" something, that means I think it's *true* ...
> Now, as far as I can see, the desirability, or lack of it, possessed
> by the notion of "free will", has no bearing on the likelihood of its
> being *true*. So what I get from your argument, is either that "believing"
> something DOESN'T mean thinking it's true, or that the desirability of
> a proposition (like free will) constitutes evidence for its being true.

The former, given what you mean by "*true*".  But first to correct your
terminology here:  it is *not* the desirability of "free will" that is
crucial but the can't-lose nature of *believing* in it.  Furthermore
the "gain" or "loss" involved is *knowledge*:  if you believe in free
will and are right you gain knowledge; if wrong your lack of knowledge
was inevitable anyway so no loss.
	Now on to "*true*".  If truth is construed as "what we ought to
believe", a la William James, then I am saying that "desirability" is
relevant to truth.  Note that James's definition is a *reforming*
definition, not a *reporting* one.  If you reject it in favor of a 
"correspondance theory" of truth, then you face exactly two possibili-
ties:  either all such truths are what we ought to believe, or some 
aren't.  If some aren't, THEN SO MUCH THE WORSE FOR THOSE TRUTHS.
	Look at it this way:  accepting a hypothesis is a *decision*.
There are two things relevant to a decision -- the consequences of the
decision under each possible "way the world is", and the probability
of each of those possible ways the world is.  The decision is clearcut
in case either:  there is only one possible way the world is, given the
evidence, in which case the consequences of each alternative are known
with certainty; or:  one alternative has consequences *at least as good*
in every possible case and better in at least one, in which case it is
superior and the evidence for probabilities is irrelevant.  This second
case applies to free will.
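
	To make that second case concrete, here is a minimal sketch in
Python; the state names and the payoff numbers (one unit of "knowledge"
gained, as described above) are assumptions made purely for illustration,
not something the argument itself depends on.

# Illustrative sketch of the dominance reasoning above; payoffs are in
# assumed units of "knowledge gained".
STATES = ("free will exists", "no free will")
PAYOFFS = {
    # Believe in free will: gain knowledge if right, lose nothing if wrong.
    "believe":    {"free will exists": 1, "no free will": 0},
    # Withhold belief: gain nothing either way.
    "disbelieve": {"free will exists": 0, "no free will": 0},
}

def dominates(a, b):
    """Weak dominance: at least as good in every state, strictly
    better in at least one."""
    at_least_as_good = all(PAYOFFS[a][s] >= PAYOFFS[b][s] for s in STATES)
    strictly_better = any(PAYOFFS[a][s] > PAYOFFS[b][s] for s in STATES)
    return at_least_as_good and strictly_better

# The choice is clearcut with no probabilities needed at all.
print(dominates("believe", "disbelieve"))   # True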

From: rlh@cvl.UUCP (Ralph L. Hartley)
> The problem here is that there are really more than two choices: believe
> in free will, believe there is no free will, or believe that you don't
> know. Regardless of whether free will exists or not, the last choice gets
> you closest to the truth. 

No; we know we have free will because we know we can rationally evaluate
options and act accordingly.  This point is independent from my comments 
above.

> Knowing with any certainty whether one has free will or not is like
> proving within a system that the system is consistent ...

No it's not.  Consistency is not required for free will; I know that at
least one of my [other] beliefs is mistaken (this belief contradicts the 
conjunction of my other beliefs), yet I'm still free.  All that is required
is a modicum of rationality in my beliefs, not perfection.

> If you do have free will then you were free to make the wrong choice
> [in belief about free will].

Maybe I was free to, but I didn't.
					"The adventure continues ..."
				Paul V. Torek, ihnp4!wucs!wucec1!pvt1047
Don't hit that 'r' key!  Send any mail to this address, not the sender's.

kjm@ut-ngp.UUCP (Ken Montgomery) (02/16/85)


esk@wucs.UUCP (Paul V. Torek) writes:
>From: rlr@pyuxd
>> > believing in free will is OBjectively better because it carries no 
>> > penalty (of avoidable error) if mistaken but does carry a benefit 
>> > (avoiding avoidable error) if correct.  [TOREK]
>> Didn't someone else already explain this as analogous to Pascal's reasoning
>> for believing in god?  And didn't that person explain that, objectively and
>> rationally, one doesn't choose beliefs based on their utility, but rather
>> on their correctness?  [RICH ROSEN]
>
>And didn't I explain that that's a false dichotomy?  (How can a belief be 
>"correct" if we ought to believe the opposite???)

The position of the medieval Catholic Church was that people ought
to believe that the sun orbited the earth.  Was that belief, in fact,
"correct"?

> [ ... ]
>> Sorry, Paul.  Interestingness or noninterestingness have no bearing on
>> reality. ... Your anthropocentrism (and that of others) is hardly 
>> "sinless".  [ROSEN]
>
>Rich, you naive realist:  If there are realities which are unknowable and
>hence uninteresting, then so much the worse for *them*.  Where is my "sin"
>(my mistake)? 

That's like saying, "If I'm brained by a boulder, but it was
unknowable to me because I could not have known what hit me,
then so much the worse for the boulder."  Sour grapes?

> [ ... ]
> it is *not* the desirability of "free will" that is
>crucial but the can't-lose nature of *believing* in it.  Furthermore
>the "gain" or "loss" involved is *knowledge*:  if you believe in free
>will and are right you gain knowledge; if wrong your lack of knowledge
>was inevitable anyway so no loss.

Belief is not the same as knowledge; believing something to be true
does not make it so.  The alleged "can't lose" nature of believing
some proposition does not make that proposition correct.

[here Mr. Torek switches to responding to Kenn Barry]
> [ ... ]
> If you reject it [the theory of truths as what one "ought to believe"]
> in favor of a 
>"correspondance theory" of truth, then you face exactly two possibili-
>ties:  either all such truths are what we ought to believe, or some 
>aren't.  If some aren't, THEN SO MUCH THE WORSE FOR THOSE TRUTHS.

Then (by the claim of the medievals as to what ought to be believed),
we should pretend like the sun really does orbit the earth...  How's
that again?

>        Look at it this way:  accepting a hypothesis is a *decision*.

One's decision does not change reality.  Acceptance of the phlogiston
hypothesis does not make it (empirically) superior to the oxygen theory
of combustion.

>   Paul V. Torek, ihnp4!wucs!wucec1!pvt1047

--
The above viewpoints are mine.  They are unrelated to
those of anyone else, including my cats and my employer.

Ken Montgomery  "Shredder-of-hapless-smurfs"
...!{ihnp4,allegra,seismo!ut-sally}!ut-ngp!kjm  [Usenet, when working]
kjm@ut-ngp.ARPA  [for Arpanauts only]

barry@ames.UUCP (Kenn Barry) (02/21/85)

>From: barry@ames.UUCP (Kenn Barry)
>> ...if I "believe" something, that means I think it's *true* ...
>> Now, as far as I can see, the desirability, or lack of it, possessed
>> by the notion of "free will", has no bearing on the likelihood of its
>> being *true*. So what I get from your argument, is either that "believing"
>> something DOESN'T mean thinking it's true, or that the desirability of
>> a proposition (like free will) constitutes evidence for its being true.
>
>The former, given what you mean by "*true*".  But first to correct your
>terminology here:  it is *not* the desirability of "free will" that is
>crucial but the can't-lose nature of *believing* in it.  Furthermore
>the "gain" or "loss" involved is *knowledge*:  if you believe in free
>will and are right you gain knowledge; if wrong your lack of knowledge
>was inevitable anyway so no loss.

	How have I gained knowledge? If I believe in free will for the
reasons you propose, and I happen to be correct, I've made a lucky guess.

>	Now on to "*true*".  If truth is construed as "what we ought to
>believe", a la William James, then I am saying that "desirability" is
>relevant to truth.  Note that James's definition is a *reforming*
>definition, not a *reporting* one.  If you reject it in favor of a 
>"correspondance theory" of truth, then you face exactly two possibili-
>ties:  either all such truths are what we ought to believe, or some 
>aren't.  If some aren't, THEN SO MUCH THE WORSE FOR THOSE TRUTHS.

	Since I would *define* "what we ought to believe" as those things
which are true (correspondence theory), no contradictions can arise.
I still don't see why you want to stretch a good, usable word like "truth"
to include harmless and pleasurable ideas, without regard to their factual
accuracy. There is no "correspondence theory" of truth; it's a definition,
not a theory. Why argue definitions? We both seem to agree that belief
in free will is harmless, ought to be pleasant, and is as likely to be
accurate as determinism is. We also both agree that free will may or
may not be a factually correct description of reality. Or so I gather.
If we do agree on these things, I'm perfectly willing to let you reform
the definition of "truth" to include your justifications for belief in
free will. But can I have some other word, please, to denote only ideas
that correctly describe reality?

>	Look at it this way:  accepting a hypothesis is a *decision*.
>There are two things relevant to a decision -- the consequences of the
>decision under each possible "way the world is", and the probability
>of each of those possible ways the world is.  The decision is clearcut
>in case either:  there is only one possible way the world is, given the
>evidence, in which case the consequences of each alternative are known
>with certainty; or:  one alternative has consequences *at least as good*
>in every possible case and better in at least one, in which case it is
>superior and the evidence for probabilities is irrelevant.  This second
>case applies to free will.

	There is, of course, another possibility: admitting that you
don't have enough evidence to decide the question, and having no opinion.
Your logic still seems like it could be used to justify ridiculous
assumptions. Why shouldn't I believe, for instance, that I will surely
go to Heaven when I die? Wouldn't your logic justify that belief, too?
It's not disprovable, and its consequences are at least as good in every
possible case, and better in one. If you argue that my belief in a certain
heavenly reward might cause my damnation by a vengeful god, I could argue
the very same about your belief in free will.
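
	To put that in the terms of your own dominance case, here is a
minimal sketch, assuming a third possible state in which a vengeful god
punishes the presumptuous belief (the states and payoff numbers are, again,
purely illustrative): once such a state is admitted, the belief no longer
dominates.

# Illustrative sketch of the counterexample: one extra possible state
# carrying a penalty is enough to break weak dominance.
STATES = ("heaven rewards belief", "no afterlife",
          "vengeful god punishes belief")
PAYOFFS = {
    "believe in heavenly reward": {"heaven rewards belief": 1,
                                   "no afterlife": 0,
                                   "vengeful god punishes belief": -1},
    "withhold belief":            {"heaven rewards belief": 0,
                                   "no afterlife": 0,
                                   "vengeful god punishes belief": 0},
}

def dominates(a, b):
    """Weak dominance: at least as good in every state, strictly
    better in at least one."""
    return (all(PAYOFFS[a][s] >= PAYOFFS[b][s] for s in STATES) and
            any(PAYOFFS[a][s] > PAYOFFS[b][s] for s in STATES))

print(dominates("believe in heavenly reward", "withhold belief"))   # False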

-  From the Crow's Nest  -                      Kenn Barry
                                                NASA-Ames Research Center
                                                Moffett Field, CA
-------------------------------------------------------------------------------
 	USENET:		 {ihnp4,vortex,dual,hao,menlo70,hplabs}!ames!barry