[net.puzzle] Newcomb's Paradox

weemba@brahms.BERKELEY.EDU (Matthew P. Wiener) (03/21/86)

I am presenting a famous dilemma originally proposed by the physicist
William Newcomb.  I think it can be of interest to the readers of all
the newsgroups I've posted to, because it touches on the nature of faith
and reason, but I am directing all followups to net.puzzle only.

In particular, I'd like to know if it has any bearing on the possibilities
of either perfect precognition or rational decision making.
-------------------------------------------------------------------------
The situation involves a being X.  X is precognizant.  In the first
version of the problem, X is perfect in this power.  If you like, X
is God.  In the second version, X is only partially precognizant, but
has a very very good track record--at least 99% accurate according to
all studies.

X is also very rich and completely honest.  X puts an unknown amount
of money in two boxes A and B.  X tells you that he put $1K (= one
thousand dollars) in box A.  X also tells you that he put either $0
or $1M (= one million dollars) in box B.  You are now given one chance
to earn some quick and easy money.  Your only options are
	(1) the one-boxer option:
		Open box B only.
	(2) the two-boxer option:
		Open box A and B both.
You are not going to be given a second chance nor a third option.  X
furthermore tells you that he put $1M in box B if X predicted you would
follow option (1) only.  He then tells you he put $0 in box B if he
predicted you would follow option (2) only, or if you end up deciding
to use a randomization device (other than, if you wish, your own free
will).

The question is, what do you pick, in either version?  And why?

Let me emphasize, there is no retroactive changing of the contents of
box B.  Either there is a million dollars waiting for you in box B or
there isn't.  There definitely is a thousand dollars waiting for you
in box A.
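
For concreteness, here is the complete payoff table, written out as a
little Python sketch (the names are mine and purely illustrative --
nothing below adds to the problem itself):

	# Box A always holds $1K; box B holds $1M iff X predicted option (1).
	def payoff(predicted_one_boxer, take_both_boxes):
	    box_a = 1000
	    box_b = 1000000 if predicted_one_boxer else 0
	    return box_a + box_b if take_both_boxes else box_b

	# payoff(True,  False) == 1000000    predicted (1), you open B only
	# payoff(True,  True)  == 1001000    predicted (1), you open both
	# payoff(False, False) == 0          predicted (2), you open B only
	# payoff(False, True)  == 1000       predicted (2), you open both
	# A perfect X, of course, never lets the two mismatched rows happen.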

And if it all seems too simple to you, would it make any difference if
the boxes were transparent?

ucbvax!brahms!weemba	Matthew P Wiener/UCB Math Dept/Berkeley CA 94720

desj@brahms.BERKELEY.EDU (David desJardins) (03/22/86)

   I am keeping this in net.philosophy because that seems much more relevant
to what I have to say (and because I want to see the responses and can't
stand to read net.puzzle); in particular I want to discuss the value of
rational thought.

   I would be willing to believe that with sufficient knowledge of the state
of my brain a sufficiently resourceful opponent could indeed predict with
high probability my response to this situation (given sufficient evidence
to this effect).  I also believe that the rational course of behavior is
to take both boxes.  Certainly this maximizes your expected yield in the
given situation.  But, as I noted, I do expect to get only $1000 this way.

   The only way to "win" this game is to make the (conscious or unconscious)
decision *in advance* to take only the one box.  In fact, no matter what your
choice at the actual event, you are always at least as well off to have been
committed beforehand to take only the one box!
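
   (A quick sketch of the arithmetic, in Python; modeling X as a reader
of dispositions, which is my working assumption rather than part of the
problem statement:)

	# X reads your disposition correctly in 99 trials out of 100.
	ev_committed_one_boxer = (99 * 1000000 + 1 * 0) // 100       # 990000
	ev_two_boxer           = (99 * 1000 + 1 * 1001000) // 100    # 11000
	# Yet for any *fixed* contents of box B, taking both boxes pays exactly
	# $1000 more than taking one -- the two-boxer's dominance argument.
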
   Nevertheless, to take only one box is by its nature an *irrational*
decision.  Not irrational in terms of results, but irrational when contrasted
with desirable behavior in other circumstances.

   So essentially you have to decide, *in advance*, that you are going to
make an irrational decision in certain circumstances.  This advance decision
is, in itself, rational, since its result can be foreseen to be favorable.
But it represents a compromise.  Such a movement away from rationality has
its own costs in all sorts of other situations.  I personally place such a
high value on rational behavior that I consider the cost to be too great.

   I admit that this is something of a cop-out, in that my unwillingness to make
this compromise is largely based on my low estimate of the probability of
such a situation.  Suppose our society gave this test to every individual at
a certain age.  Then the compromise would certainly be worthwhile.  Even the
known existence of such a precogniter might sway me; I don't know.  All I
know is that at the moment I am not willing to make this compromise with
irrationality.

   -- David desJardins

gsmith@brahms.BERKELEY.EDU (Gene Ward Smith) (03/22/86)

In article <12539@ucbvax.BERKELEY.EDU> desj@brahms (David desJardins) writes:

>   I would be willing to believe that with sufficient knowledge of the state
>of my brain a sufficiently resourceful opponent could indeed predict with
>high probability my response to this situation (given sufficient evidence
>to this effect).  I also believe that the rational course of behavior is
>to take both boxes.  Certainly this maximizes your expected yield in the
>given situation.  But, as I noted, I do expect to get only $1000 this way.

   I find it hard to see why, if the "irrational" course of behavior
nets $1,000,000 while the "rational" one yields only $1,000, you persist
in calling the behavior of opening only one box "irrational".

>   The only way to "win" this game is to make the (conscious or unconscious)
>decision *in advance* to take only the one box.  In fact, no matter what your

>   Nevertheless, to take only one box is by its nature an *irrational*
>decision.  Not irrational in terms of results, but irrational when contrasted
>with desirable behavior in other circumstances.
>
>   So essentially you have to decide, *in advance*, that you are going to
>make an irrational decision in certain circumstances.  This advance decision
>is, in itself, rational, since its result can be foreseen to be favorable.

   I think the irrationality you perceive is not in the behavior of the
box-taker, but in the situation. In spite of saying that you accept it
as conceivable, you appear to be implicitly rejecting it. Fine, except
that this is the premise. The premise *must* be accepted before attempting
to find the rational answer to the problem. The rational answer then is
(by definition, I maintain) the one which gives you the highest return.
This is the *same* definition of rationality which we employ under other,
less peculiar, circumstances.

>But it represents a compromise. Such a movement away from rationality has
>its own costs in all sorts of other situations. I personally place such a
>high value on rational behavior that I consider the cost to be too great.

   Why not say instead that the cost of not employing our usual standard
of rationality even in unusual circumstances might become great? In this
hypothetical case, it has lost you $999,000.

   I think one source of difficulty is the idea "taking the second box
won't change the circumstances; it can't change what is in the boxes
already". But the *premise* says that deciding to take both boxes is
a circumstance which does affect what is in the boxes. You should either
accept the premise, or maintain that it is impossible.

ucbvax!brahms!gsmith    Gene Ward Smith/UCB Math Dept/Berkeley CA 94720
ucbvax!weyl!gsmith       "DUMB problem!! DUMB!!!" -- Robert L. Forward

kort@hounx.UUCP (B.KORT) (03/22/86)

I enjoyed David desJardins' article in which he confronts the paradox
of rationality.

Along the same lines, consider the consequences of behaving completely
rationally (whatever that means) at all times.  Along comes your
Nefarious Adversary, hell-bent on making personal gains at your
expense.  To the extent that Dr. NA can mimic your rational decision-
making process, he can anticipate your behavior and use that foreknowledge
to his own advantage.  Realizing the danger of behaving in too *predictable*
a fashion, you incorporate a degree of randomness into your strategy.
Now you are beginning to look like a Random Number Generator to Dr. NA.
You have vexed him and defeated his strategy.
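
(A toy simulation of the point, in Python; I'm assuming Dr. NA has
learned your deterministic rule perfectly but has no model of your coin:)

	import random

	def deterministic_you():
	    return 0                      # always the same "rational" move

	def randomized_you():
	    return random.randint(0, 1)   # a coin flip Dr. NA cannot mimic

	def na_win_rate(you, rounds=100000):
	    # Dr. NA wins a round when his guess matches your move; having
	    # mimicked your reasoning, he always guesses 0.
	    return sum(you() == 0 for _ in range(rounds)) / rounds

	print(na_win_rate(deterministic_you))   # 1.0  -- perfectly exploitable
	print(na_win_rate(randomized_you))      # ~0.5 -- reduced to chance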

Question:  From the outside, do you *appear* to be behaving "rationally"
or "irrationally"?  That is, does rational behavior mean "predictable"
behavior or "successful" behavior, or what?  (Or not?)  Is the appearance
that you present a function of the observer?  (That is, does it depend
on how the observer answers the second question in this paragraph?)

So we return to my opening dilemma.  What *exactly* do we mean by 
completely rational behavior?

--Barry Kort  ...ihnp4!hounx!kort

kort@hounx.UUCP (B.KORT) (03/22/86)

OK, Matthew, here's my strategy for playing the Newcomb game.
I open Box B.  My reasons are as follows.

If the guy's for real (which I doubt), I get my cool $1 Million.
If the guy's a fake, then either I bilk him out of $1 Million
or I prove he's a fake when Box B turns up empty.  As for the
$1000, that's small potatoes.  I wouldn't worry about that when
I have a chance to either profit big or expose a world-class fraud.

--Barry Kort  ...ihnp4!hounx!kort

torek@umich.UUCP (Paul V. Torek ) (03/23/86)

I think this belongs also in net.philosophy (at least, my angle does).

The statement of the puzzle is misleading because it encourages us to make
a single, specific decision.  Actually, this is just one instance of a more
general problem, on which a decision of principle must be made.  The more
general problem is that sometimes, aiming directly at what we value is not
the most effective way to achieve it.  The solution to the general problem
is to adopt a set of dispositions (emotions, decision-rules, etc.) that
will be likely to have the best consequences.

In other words, one should think of the question not as "should I take
one box, or two?" but as "should I be the kind of person who takes one
box, or the kind who takes two?"  But now the answer is obvious.  (Not
quite:  since there are no Perfect Predictors in the real world, and
since it would complicate one's psychology needlessly to become the
kind of person who takes one box, we shouldn't bother.  But the people
who live in that imaginary world with the Perfect Predictor should.)

--Paul Torek						torek@umich

desj@brahms.BERKELEY.EDU (David desJardins) (03/23/86)

In article <12549@ucbvax.BERKELEY.EDU> gsmith@brahms.UUCP (Gene Ward Smith)
writes [in response to my posting]:
>   I think the irrationality you perceive is not in the behavior of the
>box-taker, but in the situation. In spite of saying that you accept it
>as conceivable, you appear to be implicitly rejecting it. Fine, except
>that this is the premise. The premise *must* be accepted before attempting
>to find the rational answer to the problem. The rational answer then is
>(by definition, I maintain) the one which gives you the highest return.
>This is the *same* definition of rationality which we employ under other,
>less peculiar, circumstances.
>
>   I think one source of difficulty is the idea "taking the second box
>won't change the circumstances; it can't change what is in the boxes
>already". But the *premise* says that deciding to take both boxes is
>a circumstance which does affect what is in the boxes. You should either
>accept the premise, or maintain that it is impossible.

   Aha!  The trouble is that we are interpreting the problem differently.
Your concept of precognition apparently involves the direct observation of
future events, i.e. the reversal of cause and effect.  If this were the
case, of course, one would take only one box.

In article <12539@ucbvax.BERKELEY.EDU> desj@brahms (David desJardins) writes:
>   I would be willing to believe that with sufficient knowledge of the state
>of my brain a sufficiently resourceful opponent could indeed predict with
>high probability my response to this situation (given sufficient evidence
>to this effect).

   In other words, the precognition on which I am basing my statements is
essentially a sophisticated modeling process which attempts to predict from
observable data how I will respond to a given situation.
   If I were presented with incontrovertible evidence of successful precog-
nition this is the working assumption I would make about its methodology,
and thus my response to the situation would be based on this assumption.
   Reread my posting and see if it makes any more sense in this context.

   -- David desJardins

weemba@brahms.BERKELEY.EDU (Matthew P. Wiener) (03/23/86)

In article <12569@ucbvax.BERKELEY.EDU> desj@brahms.UUCP (David desJardins) writes:
>In article <12549@ucbvax.BERKELEY.EDU> gsmith@brahms.UUCP (Gene Ward Smith)
>writes [in response to my posting]:
>In article <12539@ucbvax.BERKELEY.EDU> desj@brahms (David desJardins) writes:
>>   I would be willing to believe that with sufficient knowledge of the state
>>of my brain a sufficiently resourceful opponent could indeed predict with
>>high probability my response to this situation (given sufficient evidence
>>to this effect).
>
>   In other words, the precognition on which I am basing my statements is
>essentially a sophisticated modeling process which attempts to predict from
>observable data how I will respond to a given situation.
>   If I were presented with incontrovertible evidence of successful precog-
>nition this is the working assumption I would make about its methodology,
>and thus my response to the situation would be based on this assumption.
>   Reread my posting and see if it makes any more sense in this context.

I would like to thank David for clarifying my original posting: in my
second version, where the being X has a 99+% success rate, I said he
used precognition.  I suppose the best way to distinguish the two cases
is by agreeing to use different words: "precognition" means seeing the
future by some non-predictive "direct" method whereas "prognostication"
means seeing the future by some super sophisticated predictive method.

So I wish to amend my original statement of Newcomb's problem to allow
for three versions:

(1)	X is 100% precognizant.      ( God ? )
(2)	X is 99+% precognizant.      ( Dave Trissel ? )
(3)	X is 99+% prognosticant.     ( Barry Kort ? )

(The concept of a 100% prognosticant is empty under the meaning I'm using.)

Furthermore, does it make a difference to you if the odds are lowered to 80%?
Does it make a difference if you've studied past runs in versions (2), (3) and
noticed which way they err: randomly, always/usually putting in the $1M and
surprising the two-boxers, always/usually putting in $0 and really surprising
the hell out of the one-boxers?
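
(Under the straightforward expected-value reading -- my arithmetic here,
not part of the problem -- the break-even accuracy is easy to locate:)

	# Accuracy in tenths of a percent, to keep the arithmetic exact.
	for p in (990, 800, 501, 500):
	    e_one_box = p * 1000000 // 1000
	    e_two_box = (1000 * 1000 + (1000 - p) * 1000000) // 1000
	    print(p / 10, e_one_box, e_two_box)
	# 99.0% -> 990000 vs  11000      80.0% -> 800000 vs 201000
	# 50.1% -> 501000 vs 500000      50.0% -> 500000 vs 501000
	# One-boxing wins in expectation at any accuracy above 50.05%.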

ucbvax!brahms!weemba	Matthew P Wiener/UCB Math Dept/Berkeley CA 94720

js2j@mhuxt.UUCP (sonntag) (03/24/86)

  I've deleted the problem statement here.  Hopefully, everyone's read
a copy of it by now.

> The question is, what do you pick, in either version?  And why?
> 
> And if it all seems too simple to you, would it make any difference if
> the boxes were transparent?
> ucbvax!brahms!weemba	Matthew P Wiener/UCB Math Dept/Berkeley CA 94720

   In the first case, with a perfect precognitor, choosing only box B
nets you a million dollars.  Choosing the two-box option gets you $1000.
And, of course, it would make no difference if the boxes were transparent:
I'd still take the million bucks in box B.  If there weren't a million
bucks in box B, I'd still choose it, just so I could show this precognitor
to be a charlatan.
   In the second case, with a 99% accurate precognitor, choosing only
box B yields an expected payoff of $990,000.  Choosing both boxes yields
an expected payoff of $11,000.  With opaque boxes, I'd have to choose just
box B.  With transparent boxes, I'd arrive wearing a blindfold, open just
box B, and hope the precognitor hadn't made a mistake.  It would be
tempting to peek, planning to take both boxes if there was no megabuck in
B, or to take both anyway in order to get the extra kilobuck.  However,
if I did that, then unless the precognitor was in error, I'd find box B
empty and choose the two-box option, just as predicted.
    It's this kind of paradox that suggests the impossibility of precognition.
If X predicts that I'll open both boxes, and so doesn't put the megabuck in
box B, peekers will see that and choose both boxes.  If X predicts that I'll
open just box B, greedy peekers will pick both boxes, invalidating X's
prediction.  Non-greedy peekers will fulfill the prophecy, of course.
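
(That fixed-point structure can be checked by brute force; a Python
sketch, with the two peeker policies as just described:)

	# Transparent boxes: which predictions can come true against a peeker?
	def greedy(box_b):              # grabs both boxes whatever he sees
	    return "both"

	def non_greedy(box_b):          # takes B alone if the megabuck is there
	    return "B only" if box_b else "both"

	for predict in ("B only", "both"):
	    box_b = 1000000 if predict == "B only" else 0
	    for peeker in (greedy, non_greedy):
	        print(predict, peeker.__name__, peeker(box_b) == predict)
	# Against a greedy peeker only "both" is self-consistent (X must leave
	# B empty, and the peeker walks off with $1000).  Against a non-greedy
	# peeker either prediction fulfills itself -- X decides whether the
	# prophecy is worth $1M or $1000.
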
    What does this have to do with faith?  I'm really not sure, but the
non-peekers made out best in this carefully constructed example, which assumes
the existence of a precognitor.  Maybe it's meant to point out that the
faithful will make out best if there exists a god who has made up certain
arbitrary rules?  I feel compelled to note that they'll do less well than
the peekers if they're wrong about this assumption.
-- 
Jeff Sonntag
ihnp4!mhuxt!js2j

kort@hounx.UUCP (B.KORT) (03/25/86)

I agree with Jeff Sonntag that Newcomb's Paradox suggests that
perfect precognition is impossible.  Another convincing proof
appears in a charming piece by Smullyan entitled "Is God Stubborn?"
Smullyan sets up a scenario where an omniscient God cannot *reveal*
his prediction of which breakfast cereal the stubbornly willful
and defiant mortal will select.  The mortal has vowed to select
the opposite of whatever God predicts.
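
(The argument is a small diagonalization; in Python, with the cereal
names invented for the occasion:)

	def stubborn_mortal(revealed_prediction):
	    # Vows to pick the opposite of whatever God announces.
	    return "Krispies" if revealed_prediction == "Flakes" else "Flakes"

	for prediction in ("Flakes", "Krispies"):
	    choice = stubborn_mortal(prediction)
	    print(prediction, choice, prediction == choice)   # always False
	# No revealed prediction can be correct: the choice function is
	# built to have no fixed point.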

(Of course, if *I* were doing the prediction, I'd starve the
bastard into submission by predicting that he'd eat at least
one of his choices.)

--Barry Kort  ...ihnp4!hounx!kort

lotto@brahms.BERKELEY.EDU (Ben Lotto) (03/26/86)

In article <754@hounx.UUCP> kort@hounx.UUCP (B.KORT) writes:
>I agree with Jeff Sonntag that Newcomb's Paradox suggests that
>perfect precognition is impossible. . . . 
>. . . an omniscient God cannot *reveal*
>his prediction of which breakfast cereal the stubbornly willful
>and defiant mortal will select.  

All that you have proved here is that perfect precognition is
inconsistent with free will.  If the hypotheses of the puzzle were
ever truly met (with transparent boxes), then the person faced
with the "choice" on decision day would really have no choice;
since, BY HYPOTHESIS, the perfectly precognizant being is PERFECTLY
PRECOGNIZANT, the "chooser" WILL make the appropriate selection.

Conclusion: Either free will and perfect precognition are incompatible,
or one must place a great many constraints on exactly when the perfectly
precognizant being is allowed to exercise his power.  (Perhaps this is
why we haven't heard from God in such a long time?)

-Ben   (ucbvax!brahms!lotto / lotto@brahms.Berkeley.EDU)
       (Dept of Mathematics, UC Berkeley, Berkeley, CA  94720)

gsmith@weyl.berkeley.edu.BERKELEY.EDU (Gene Ward Smith) (03/27/86)

In article <12682@ucbvax.BERKELEY.EDU> lotto@brahms.UUCP (Ben Lotto) writes:

>All that you have proved here is that perfect precognition is
>inconsistent with free will.  If the hypotheses of the puzzle were
>ever truly met (with transparent boxes), then the person faced
>with the "choice" on decision day would really have no choice;
>since, BY HYPOTHESIS, the perfectly precognizant being is PERFECTLY
>PRECOGNIZANT, the "chooser" WILL make the appropriate selection.

     It seems to me that precognition would still be compatible with
free will in one agent -- the one with precognition. This agent could
figure out what it was going to do by deciding, and then would of course
know. Maybe God is free and we are all robots?

ucbvax!brahms!gsmith    Gene Ward Smith/UCB Math Dept/Berkeley CA 94720
        Fifty flippant frogs / Walked by on flippered feet
    And with their slime they made the time / Unnaturally fleet.

tos@psc70.UUCP (Dr.Schlesinger) (03/28/86)

    When I was a kid back in the 1920's in Europe, radio was fairly
novel, and the following tale went around:
    Brat: Daddy how does the radio work?
    Father: Think of a dog. You pull his tail and he goes "bow-wow".
    Brat: Yes, and then... ?
    Father: Now think of the same thing without the dog.

Now there's your definition of the physicality of the godhead.


Tom Schlesinger
Plymouth State College
Plymouth, N.H. 03264
decvax!dartvax!psc70!psc90!tos